Test Report: QEMU_macOS 17174

7689d73509a567ada6f3653fa0ef2156acc9a338:2023-09-06:30902

Failed tests (87/244)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 13.94
7 TestDownloadOnly/v1.16.0/kubectl 0
20 TestOffline 9.9
22 TestAddons/Setup 44.54
23 TestCertOptions 10.1
24 TestCertExpiration 195.25
25 TestDockerFlags 9.97
26 TestForceSystemdFlag 11.69
27 TestForceSystemdEnv 10.13
72 TestFunctional/parallel/ServiceCmdConnect 32.66
110 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.18
139 TestImageBuild/serial/BuildWithBuildArg 1.04
148 TestIngressAddonLegacy/serial/ValidateIngressAddons 56.63
183 TestMountStart/serial/StartWithMountFirst 10.35
186 TestMultiNode/serial/FreshStart2Nodes 9.9
187 TestMultiNode/serial/DeployApp2Nodes 69.16
188 TestMultiNode/serial/PingHostFrom2Pods 0.08
189 TestMultiNode/serial/AddNode 0.07
190 TestMultiNode/serial/ProfileList 0.1
191 TestMultiNode/serial/CopyFile 0.06
192 TestMultiNode/serial/StopNode 0.13
193 TestMultiNode/serial/StartAfterStop 0.1
194 TestMultiNode/serial/RestartKeepsNodes 5.36
195 TestMultiNode/serial/DeleteNode 0.1
196 TestMultiNode/serial/StopMultiNode 0.15
197 TestMultiNode/serial/RestartMultiNode 5.24
198 TestMultiNode/serial/ValidateNameConflict 19.84
202 TestPreload 9.85
204 TestScheduledStopUnix 10.05
205 TestSkaffold 11.89
208 TestRunningBinaryUpgrade 146.01
210 TestKubernetesUpgrade 15.28
223 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.45
224 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.14
225 TestStoppedBinaryUpgrade/Setup 142.44
227 TestPause/serial/Start 9.77
237 TestNoKubernetes/serial/StartWithK8s 9.73
238 TestNoKubernetes/serial/StartWithStopK8s 5.32
239 TestNoKubernetes/serial/Start 5.32
243 TestNoKubernetes/serial/StartNoArgs 5.31
245 TestNetworkPlugins/group/kindnet/Start 9.79
246 TestNetworkPlugins/group/auto/Start 9.73
247 TestNetworkPlugins/group/flannel/Start 9.76
248 TestNetworkPlugins/group/enable-default-cni/Start 9.73
249 TestNetworkPlugins/group/bridge/Start 9.7
250 TestNetworkPlugins/group/kubenet/Start 9.73
251 TestNetworkPlugins/group/custom-flannel/Start 9.69
252 TestNetworkPlugins/group/calico/Start 10.69
253 TestStoppedBinaryUpgrade/Upgrade 1.48
254 TestStoppedBinaryUpgrade/MinikubeLogs 0.11
255 TestNetworkPlugins/group/false/Start 11.69
257 TestStartStop/group/old-k8s-version/serial/FirstStart 11.75
259 TestStartStop/group/no-preload/serial/FirstStart 9.88
260 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
261 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
264 TestStartStop/group/old-k8s-version/serial/SecondStart 5.22
265 TestStartStop/group/no-preload/serial/DeployApp 0.09
266 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
269 TestStartStop/group/no-preload/serial/SecondStart 5.25
270 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
271 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
272 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
273 TestStartStop/group/old-k8s-version/serial/Pause 0.1
275 TestStartStop/group/embed-certs/serial/FirstStart 9.98
276 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
277 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.05
278 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
279 TestStartStop/group/no-preload/serial/Pause 0.1
281 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10
282 TestStartStop/group/embed-certs/serial/DeployApp 0.09
283 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
286 TestStartStop/group/embed-certs/serial/SecondStart 5.21
287 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
288 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
291 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
292 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
293 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.27
294 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.09
295 TestStartStop/group/embed-certs/serial/Pause 0.11
297 TestStartStop/group/newest-cni/serial/FirstStart 9.89
298 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
299 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.05
300 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
301 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
306 TestStartStop/group/newest-cni/serial/SecondStart 5.25
309 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
310 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.16.0/json-events (13.94s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-830000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-830000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 : exit status 40 (13.943599292s)

-- stdout --
	{"specversion":"1.0","id":"fde55db2-1642-4c0b-a315-2842d20e928c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-830000] minikube v1.31.2 on Darwin 13.5.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"11f25979-2abc-43f4-96ea-d2162bca8a3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17174"}}
	{"specversion":"1.0","id":"dcd77e5f-5482-4af2-80ca-9e1da04fea27","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig"}}
	{"specversion":"1.0","id":"04151918-f149-4d8d-b7a8-4098a887b0d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"cac8e44f-1481-4c67-a909-46de2a556edb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"14968970-cf55-4ea9-a45c-0e040be2c8bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube"}}
	{"specversion":"1.0","id":"7ddc24c9-f940-4497-9309-442db2b19a54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"ca8173e3-0d08-45fc-ae87-b9b3da30bf92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"30464d16-3da2-4e19-8fb6-f703dae82362","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"3bba2be5-2641-48e9-970a-08da481b7ffe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"7869a083-c9c6-498b-bbaa-5bd89695139d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node download-only-830000 in cluster download-only-830000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"60d6eb0c-40f5-4419-a00a-7c96ee1203f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.16.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"da50eba9-f19a-42e8-82a1-157a3693d923","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17174-979/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106499f68 0x106499f68 0x106499f68 0x106499f68 0x106499f68 0x106499f68 0x106499f68] Decompressors:map[bz2:0x14000057de0 gz:0x14000057de8 tar:0x14000057d90 tar.bz2:0x14000057da0 tar.gz:0x14000057db0 tar.xz:0x14000057dc0 tar.zst:0x14000057dd0 tbz2:0x14000057da0 tgz:0x14000057db0 txz:0x14000057dc0 tzst:0x14000057dd0 xz:0x14000057df0 zip:0x14000057e00 zst:0x14000057df8] Getters:map[file:0x140003f45b0 http:0x14000b04140 https:0x14000b04190] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"dd2ba24c-5d98-4bd0-aa1e-9d22f3e1c815","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0906 16:36:48.822815    1399 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:36:48.822968    1399 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:36:48.822971    1399 out.go:309] Setting ErrFile to fd 2...
	I0906 16:36:48.822973    1399 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:36:48.823084    1399 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	W0906 16:36:48.823158    1399 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17174-979/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17174-979/.minikube/config/config.json: no such file or directory
	I0906 16:36:48.824254    1399 out.go:303] Setting JSON to true
	I0906 16:36:48.840743    1399 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":382,"bootTime":1694043026,"procs":412,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0906 16:36:48.840795    1399 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 16:36:48.846205    1399 out.go:97] [download-only-830000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 16:36:48.849256    1399 out.go:169] MINIKUBE_LOCATION=17174
	I0906 16:36:48.846365    1399 notify.go:220] Checking for updates...
	W0906 16:36:48.846355    1399 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball: no such file or directory
	I0906 16:36:48.855148    1399 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:36:48.858180    1399 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 16:36:48.861209    1399 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 16:36:48.864128    1399 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	W0906 16:36:48.870232    1399 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0906 16:36:48.870517    1399 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 16:36:48.873328    1399 out.go:97] Using the qemu2 driver based on user configuration
	I0906 16:36:48.873333    1399 start.go:298] selected driver: qemu2
	I0906 16:36:48.873335    1399 start.go:902] validating driver "qemu2" against <nil>
	I0906 16:36:48.873388    1399 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 16:36:48.877191    1399 out.go:169] Automatically selected the socket_vmnet network
	I0906 16:36:48.882647    1399 start_flags.go:384] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0906 16:36:48.882738    1399 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0906 16:36:48.882806    1399 cni.go:84] Creating CNI manager for ""
	I0906 16:36:48.882821    1399 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0906 16:36:48.882827    1399 start_flags.go:321] config:
	{Name:download-only-830000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-830000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:36:48.888416    1399 iso.go:125] acquiring lock: {Name:mk208580a41bd83b6238142fbb7ad0497ef8967b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:36:48.892225    1399 out.go:97] Downloading VM boot image ...
	I0906 16:36:48.892255    1399 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso
	I0906 16:36:53.024730    1399 out.go:97] Starting control plane node download-only-830000 in cluster download-only-830000
	I0906 16:36:53.024750    1399 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0906 16:36:53.080086    1399 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0906 16:36:53.080161    1399 cache.go:57] Caching tarball of preloaded images
	I0906 16:36:53.080326    1399 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0906 16:36:53.085451    1399 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0906 16:36:53.085458    1399 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0906 16:36:53.167439    1399 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0906 16:37:01.626139    1399 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0906 16:37:01.626280    1399 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0906 16:37:02.268529    1399 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0906 16:37:02.268723    1399 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/download-only-830000/config.json ...
	I0906 16:37:02.268741    1399 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/download-only-830000/config.json: {Name:mk90bc48de1e752792895fdcb60d4de4be53d699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:37:02.268948    1399 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0906 16:37:02.269112    1399 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17174-979/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0906 16:37:02.694579    1399 out.go:169] 
	W0906 16:37:02.699494    1399 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17174-979/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106499f68 0x106499f68 0x106499f68 0x106499f68 0x106499f68 0x106499f68 0x106499f68] Decompressors:map[bz2:0x14000057de0 gz:0x14000057de8 tar:0x14000057d90 tar.bz2:0x14000057da0 tar.gz:0x14000057db0 tar.xz:0x14000057dc0 tar.zst:0x14000057dd0 tbz2:0x14000057da0 tgz:0x14000057db0 txz:0x14000057dc0 tzst:0x14000057dd0 xz:0x14000057df0 zip:0x14000057e00 zst:0x14000057df8] Getters:map[file:0x140003f45b0 http:0x14000b04140 https:0x14000b04190] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0906 16:37:02.699523    1399 out_reason.go:110] 
	W0906 16:37:02.706488    1399 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 16:37:02.710477    1399 out.go:169] 

** /stderr **
aaa_download_only_test.go:71: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-830000" "--force" "--alsologtostderr" "--kubernetes-version=v1.16.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.16.0/json-events (13.94s)
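The exit status 40 above comes down to one missing artifact: minikube composes the kubectl download URL from the requested Kubernetes version plus the host OS and architecture, and v1.16.0 predates darwin/arm64 (Apple silicon) builds, so the `.sha1` checksum file, and the binary itself, return 404. A minimal sketch of that URL composition, assuming only the pattern visible in the error message above (`kubectl_url` is an illustrative helper, not a minikube function):

```shell
#!/bin/sh
# Build the dl.k8s.io release URL the same way the failing download does.
# kubectl_url is a hypothetical helper for illustration; the URL pattern
# is copied from the error message in the log above.
kubectl_url() {
  version="$1"; os="$2"; arch="$3"
  printf 'https://dl.k8s.io/release/%s/bin/%s/%s/kubectl\n' "$version" "$os" "$arch"
}

kubectl_url v1.16.0 darwin arm64
# To confirm the 404 by hand (network required):
#   curl -sI "$(kubectl_url v1.16.0 darwin arm64).sha1" | head -n 1
```

Since no darwin/arm64 kubectl was ever published for v1.16.0, any run that needs to cache it on an Apple-silicon host fails the same way; the later `TestDownloadOnly/v1.16.0/kubectl` failure is this same missing file observed via `stat`.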

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:160: expected the file for binary exist at "/Users/jenkins/minikube-integration/17174-979/.minikube/cache/darwin/arm64/v1.16.0/kubectl" but got error stat /Users/jenkins/minikube-integration/17174-979/.minikube/cache/darwin/arm64/v1.16.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestOffline (9.9s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-695000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-695000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.7624375s)

-- stdout --
	* [offline-docker-695000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node offline-docker-695000 in cluster offline-docker-695000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-695000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 16:49:22.970698    2838 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:49:22.970814    2838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:49:22.970818    2838 out.go:309] Setting ErrFile to fd 2...
	I0906 16:49:22.970820    2838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:49:22.970953    2838 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:49:22.971924    2838 out.go:303] Setting JSON to false
	I0906 16:49:22.988831    2838 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1136,"bootTime":1694043026,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0906 16:49:22.988894    2838 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 16:49:22.993911    2838 out.go:177] * [offline-docker-695000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 16:49:23.001873    2838 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 16:49:23.001903    2838 notify.go:220] Checking for updates...
	I0906 16:49:23.006903    2838 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:49:23.009827    2838 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 16:49:23.012742    2838 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 16:49:23.015836    2838 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	I0906 16:49:23.018766    2838 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 16:49:23.022097    2838 config.go:182] Loaded profile config "multinode-994000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:49:23.022145    2838 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 16:49:23.025763    2838 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 16:49:23.032757    2838 start.go:298] selected driver: qemu2
	I0906 16:49:23.032766    2838 start.go:902] validating driver "qemu2" against <nil>
	I0906 16:49:23.032773    2838 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 16:49:23.034593    2838 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 16:49:23.037793    2838 out.go:177] * Automatically selected the socket_vmnet network
	I0906 16:49:23.040777    2838 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 16:49:23.040798    2838 cni.go:84] Creating CNI manager for ""
	I0906 16:49:23.040804    2838 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 16:49:23.040807    2838 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 16:49:23.040813    2838 start_flags.go:321] config:
	{Name:offline-docker-695000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:offline-docker-695000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:49:23.044945    2838 iso.go:125] acquiring lock: {Name:mk208580a41bd83b6238142fbb7ad0497ef8967b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:49:23.051582    2838 out.go:177] * Starting control plane node offline-docker-695000 in cluster offline-docker-695000
	I0906 16:49:23.055729    2838 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 16:49:23.055754    2838 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 16:49:23.055766    2838 cache.go:57] Caching tarball of preloaded images
	I0906 16:49:23.055838    2838 preload.go:174] Found /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 16:49:23.055844    2838 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 16:49:23.055899    2838 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/offline-docker-695000/config.json ...
	I0906 16:49:23.055911    2838 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/offline-docker-695000/config.json: {Name:mk5250b972fb1c2b68c6c430e7049dc5aa77e17c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:49:23.056143    2838 start.go:365] acquiring machines lock for offline-docker-695000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:49:23.056171    2838 start.go:369] acquired machines lock for "offline-docker-695000" in 21.542µs
	I0906 16:49:23.056180    2838 start.go:93] Provisioning new machine with config: &{Name:offline-docker-695000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:offline-docker-695000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:49:23.056205    2838 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:49:23.060683    2838 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0906 16:49:23.074560    2838 start.go:159] libmachine.API.Create for "offline-docker-695000" (driver="qemu2")
	I0906 16:49:23.074589    2838 client.go:168] LocalClient.Create starting
	I0906 16:49:23.074655    2838 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:49:23.074682    2838 main.go:141] libmachine: Decoding PEM data...
	I0906 16:49:23.074695    2838 main.go:141] libmachine: Parsing certificate...
	I0906 16:49:23.074737    2838 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:49:23.074759    2838 main.go:141] libmachine: Decoding PEM data...
	I0906 16:49:23.074766    2838 main.go:141] libmachine: Parsing certificate...
	I0906 16:49:23.075068    2838 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:49:23.194815    2838 main.go:141] libmachine: Creating SSH key...
	I0906 16:49:23.253689    2838 main.go:141] libmachine: Creating Disk image...
	I0906 16:49:23.253702    2838 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:49:23.253880    2838 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/offline-docker-695000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/offline-docker-695000/disk.qcow2
	I0906 16:49:23.262970    2838 main.go:141] libmachine: STDOUT: 
	I0906 16:49:23.262986    2838 main.go:141] libmachine: STDERR: 
	I0906 16:49:23.263042    2838 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/offline-docker-695000/disk.qcow2 +20000M
	I0906 16:49:23.274040    2838 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:49:23.274068    2838 main.go:141] libmachine: STDERR: 
	I0906 16:49:23.274091    2838 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/offline-docker-695000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/offline-docker-695000/disk.qcow2
	I0906 16:49:23.274105    2838 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:49:23.274159    2838 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/offline-docker-695000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/offline-docker-695000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/offline-docker-695000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:2f:ef:b9:09:14 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/offline-docker-695000/disk.qcow2
	I0906 16:49:23.275778    2838 main.go:141] libmachine: STDOUT: 
	I0906 16:49:23.275791    2838 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:49:23.275812    2838 client.go:171] LocalClient.Create took 201.219459ms
	I0906 16:49:25.277821    2838 start.go:128] duration metric: createHost completed in 2.221654875s
	I0906 16:49:25.277840    2838 start.go:83] releasing machines lock for "offline-docker-695000", held for 2.221710375s
	W0906 16:49:25.277852    2838 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:49:25.286486    2838 out.go:177] * Deleting "offline-docker-695000" in qemu2 ...
	W0906 16:49:25.293970    2838 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:49:25.293980    2838 start.go:687] Will try again in 5 seconds ...
	I0906 16:49:30.296138    2838 start.go:365] acquiring machines lock for offline-docker-695000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:49:30.296657    2838 start.go:369] acquired machines lock for "offline-docker-695000" in 388.583µs
	I0906 16:49:30.296790    2838 start.go:93] Provisioning new machine with config: &{Name:offline-docker-695000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:offline-docker-695000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:49:30.297079    2838 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:49:30.302740    2838 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0906 16:49:30.346954    2838 start.go:159] libmachine.API.Create for "offline-docker-695000" (driver="qemu2")
	I0906 16:49:30.346993    2838 client.go:168] LocalClient.Create starting
	I0906 16:49:30.347096    2838 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:49:30.347151    2838 main.go:141] libmachine: Decoding PEM data...
	I0906 16:49:30.347170    2838 main.go:141] libmachine: Parsing certificate...
	I0906 16:49:30.347239    2838 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:49:30.347274    2838 main.go:141] libmachine: Decoding PEM data...
	I0906 16:49:30.347295    2838 main.go:141] libmachine: Parsing certificate...
	I0906 16:49:30.347806    2838 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:49:30.473723    2838 main.go:141] libmachine: Creating SSH key...
	I0906 16:49:30.649650    2838 main.go:141] libmachine: Creating Disk image...
	I0906 16:49:30.649656    2838 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:49:30.649816    2838 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/offline-docker-695000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/offline-docker-695000/disk.qcow2
	I0906 16:49:30.658616    2838 main.go:141] libmachine: STDOUT: 
	I0906 16:49:30.658635    2838 main.go:141] libmachine: STDERR: 
	I0906 16:49:30.658705    2838 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/offline-docker-695000/disk.qcow2 +20000M
	I0906 16:49:30.666055    2838 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:49:30.666069    2838 main.go:141] libmachine: STDERR: 
	I0906 16:49:30.666082    2838 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/offline-docker-695000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/offline-docker-695000/disk.qcow2
	I0906 16:49:30.666087    2838 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:49:30.666130    2838 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/offline-docker-695000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/offline-docker-695000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/offline-docker-695000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:09:89:49:97:67 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/offline-docker-695000/disk.qcow2
	I0906 16:49:30.667701    2838 main.go:141] libmachine: STDOUT: 
	I0906 16:49:30.667714    2838 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:49:30.667726    2838 client.go:171] LocalClient.Create took 320.731541ms
	I0906 16:49:32.669771    2838 start.go:128] duration metric: createHost completed in 2.372726167s
	I0906 16:49:32.669796    2838 start.go:83] releasing machines lock for "offline-docker-695000", held for 2.373172458s
	W0906 16:49:32.669906    2838 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-695000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:49:32.679090    2838 out.go:177] 
	W0906 16:49:32.683202    2838 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 16:49:32.683207    2838 out.go:239] * 
	W0906 16:49:32.683695    2838 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 16:49:32.695117    2838 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-695000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:522: *** TestOffline FAILED at 2023-09-06 16:49:32.704485 -0700 PDT m=+763.977493959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-695000 -n offline-docker-695000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-695000 -n offline-docker-695000: exit status 7 (31.804375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-695000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-695000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-695000
--- FAIL: TestOffline (9.90s)
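Every attempt in the TestOffline run above dies on the same STDERR line: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. `socket_vmnet_client` found no listener on the unix-domain socket when it tried to attach QEMU's network. As an illustrative sketch only (the socket path here is a temporary stand-in, not the CI host's real `/var/run/socket_vmnet`), this is the failure mode of a socket file that exists on disk while its daemon is down:

```python
import os
import socket
import tempfile

# Bind a unix-domain socket, then close it: the socket file stays on
# disk, but nothing is listening behind it -- the same situation
# socket_vmnet_client hits when the socket_vmnet daemon is not running.
path = os.path.join(tempfile.mkdtemp(), "socket_vmnet")

srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(path)
srv.close()  # daemon "gone"; stale socket file remains

cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    cli.connect(path)
    result = "connected"
except ConnectionRefusedError:
    result = "connection refused"
finally:
    cli.close()

print(result)  # → connection refused
```

Consistent with that, the earlier TestAddons run on the same host (below) got past this step, so the daemon's availability evidently changed between runs rather than the client invocation being wrong.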

TestAddons/Setup (44.54s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-654000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-654000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (44.534865125s)

-- stdout --
	* [addons-654000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node addons-654000 in cluster addons-654000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.28.1 on Docker 24.0.5 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	
	  - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	  - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	  - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	* Verifying Kubernetes components...
	* Verifying registry addon...
	* Verifying ingress addon...

-- /stdout --
** stderr ** 
	I0906 16:37:20.157124    1472 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:37:20.157254    1472 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:37:20.157257    1472 out.go:309] Setting ErrFile to fd 2...
	I0906 16:37:20.157260    1472 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:37:20.157377    1472 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:37:20.158383    1472 out.go:303] Setting JSON to false
	I0906 16:37:20.173490    1472 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":414,"bootTime":1694043026,"procs":409,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0906 16:37:20.173552    1472 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 16:37:20.178940    1472 out.go:177] * [addons-654000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 16:37:20.185979    1472 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 16:37:20.186080    1472 notify.go:220] Checking for updates...
	I0906 16:37:20.189955    1472 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:37:20.192920    1472 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 16:37:20.195846    1472 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 16:37:20.198853    1472 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	I0906 16:37:20.201943    1472 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 16:37:20.203362    1472 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 16:37:20.207938    1472 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 16:37:20.214757    1472 start.go:298] selected driver: qemu2
	I0906 16:37:20.214763    1472 start.go:902] validating driver "qemu2" against <nil>
	I0906 16:37:20.214768    1472 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 16:37:20.216599    1472 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 16:37:20.219975    1472 out.go:177] * Automatically selected the socket_vmnet network
	I0906 16:37:20.223028    1472 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 16:37:20.223055    1472 cni.go:84] Creating CNI manager for ""
	I0906 16:37:20.223063    1472 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 16:37:20.223067    1472 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 16:37:20.223071    1472 start_flags.go:321] config:
	{Name:addons-654000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-654000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:37:20.227274    1472 iso.go:125] acquiring lock: {Name:mk208580a41bd83b6238142fbb7ad0497ef8967b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:37:20.234913    1472 out.go:177] * Starting control plane node addons-654000 in cluster addons-654000
	I0906 16:37:20.238899    1472 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 16:37:20.238918    1472 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 16:37:20.238934    1472 cache.go:57] Caching tarball of preloaded images
	I0906 16:37:20.238993    1472 preload.go:174] Found /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 16:37:20.238998    1472 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 16:37:20.239153    1472 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/addons-654000/config.json ...
	I0906 16:37:20.239165    1472 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/addons-654000/config.json: {Name:mk88a705fb21b8666a848ad2654ec15f754107b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:37:20.239374    1472 start.go:365] acquiring machines lock for addons-654000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:37:20.239479    1472 start.go:369] acquired machines lock for "addons-654000" in 99.666µs
	I0906 16:37:20.239488    1472 start.go:93] Provisioning new machine with config: &{Name:addons-654000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-654000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:37:20.239520    1472 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:37:20.243911    1472 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0906 16:37:20.552413    1472 start.go:159] libmachine.API.Create for "addons-654000" (driver="qemu2")
	I0906 16:37:20.552457    1472 client.go:168] LocalClient.Create starting
	I0906 16:37:20.552603    1472 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:37:20.745166    1472 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:37:20.963805    1472 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:37:21.548160    1472 main.go:141] libmachine: Creating SSH key...
	I0906 16:37:21.616082    1472 main.go:141] libmachine: Creating Disk image...
	I0906 16:37:21.616087    1472 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:37:21.616266    1472 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/addons-654000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/addons-654000/disk.qcow2
	I0906 16:37:21.650452    1472 main.go:141] libmachine: STDOUT: 
	I0906 16:37:21.650475    1472 main.go:141] libmachine: STDERR: 
	I0906 16:37:21.650537    1472 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/addons-654000/disk.qcow2 +20000M
	I0906 16:37:21.657906    1472 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:37:21.657918    1472 main.go:141] libmachine: STDERR: 
	I0906 16:37:21.657933    1472 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/addons-654000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/addons-654000/disk.qcow2
	I0906 16:37:21.657941    1472 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:37:21.657986    1472 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/addons-654000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/addons-654000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/addons-654000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:de:51:c7:76:4e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/addons-654000/disk.qcow2
	I0906 16:37:21.725205    1472 main.go:141] libmachine: STDOUT: 
	I0906 16:37:21.725237    1472 main.go:141] libmachine: STDERR: 
	I0906 16:37:21.725243    1472 main.go:141] libmachine: Attempt 0
	I0906 16:37:21.725260    1472 main.go:141] libmachine: Searching for 3e:de:51:c7:76:4e in /var/db/dhcpd_leases ...
	I0906 16:37:23.727387    1472 main.go:141] libmachine: Attempt 1
	I0906 16:37:23.727470    1472 main.go:141] libmachine: Searching for 3e:de:51:c7:76:4e in /var/db/dhcpd_leases ...
	I0906 16:37:25.729640    1472 main.go:141] libmachine: Attempt 2
	I0906 16:37:25.729677    1472 main.go:141] libmachine: Searching for 3e:de:51:c7:76:4e in /var/db/dhcpd_leases ...
	I0906 16:37:27.731717    1472 main.go:141] libmachine: Attempt 3
	I0906 16:37:27.731729    1472 main.go:141] libmachine: Searching for 3e:de:51:c7:76:4e in /var/db/dhcpd_leases ...
	I0906 16:37:29.733717    1472 main.go:141] libmachine: Attempt 4
	I0906 16:37:29.733728    1472 main.go:141] libmachine: Searching for 3e:de:51:c7:76:4e in /var/db/dhcpd_leases ...
	I0906 16:37:31.735700    1472 main.go:141] libmachine: Attempt 5
	I0906 16:37:31.735754    1472 main.go:141] libmachine: Searching for 3e:de:51:c7:76:4e in /var/db/dhcpd_leases ...
	I0906 16:37:33.737839    1472 main.go:141] libmachine: Attempt 6
	I0906 16:37:33.737861    1472 main.go:141] libmachine: Searching for 3e:de:51:c7:76:4e in /var/db/dhcpd_leases ...
	I0906 16:37:33.737987    1472 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0906 16:37:33.738028    1472 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:3e:de:51:c7:76:4e ID:1,3e:de:51:c7:76:4e Lease:0x64fa5ebc}
	I0906 16:37:33.738033    1472 main.go:141] libmachine: Found match: 3e:de:51:c7:76:4e
	I0906 16:37:33.738044    1472 main.go:141] libmachine: IP: 192.168.105.2
	I0906 16:37:33.738049    1472 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
	I0906 16:37:35.759717    1472 machine.go:88] provisioning docker machine ...
	I0906 16:37:35.759777    1472 buildroot.go:166] provisioning hostname "addons-654000"
	I0906 16:37:35.761278    1472 main.go:141] libmachine: Using SSH client type: native
	I0906 16:37:35.762109    1472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c0e3b0] 0x104c10e10 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0906 16:37:35.762128    1472 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-654000 && echo "addons-654000" | sudo tee /etc/hostname
	I0906 16:37:35.863462    1472 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-654000
	
	I0906 16:37:35.863622    1472 main.go:141] libmachine: Using SSH client type: native
	I0906 16:37:35.864055    1472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c0e3b0] 0x104c10e10 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0906 16:37:35.864069    1472 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-654000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-654000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-654000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 16:37:35.945081    1472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 16:37:35.945097    1472 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17174-979/.minikube CaCertPath:/Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17174-979/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17174-979/.minikube}
	I0906 16:37:35.945115    1472 buildroot.go:174] setting up certificates
	I0906 16:37:35.945123    1472 provision.go:83] configureAuth start
	I0906 16:37:35.945128    1472 provision.go:138] copyHostCerts
	I0906 16:37:35.945286    1472 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17174-979/.minikube/cert.pem (1123 bytes)
	I0906 16:37:35.945642    1472 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17174-979/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17174-979/.minikube/key.pem (1679 bytes)
	I0906 16:37:35.945800    1472 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17174-979/.minikube/ca.pem (1082 bytes)
	I0906 16:37:35.945913    1472 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17174-979/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca-key.pem org=jenkins.addons-654000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-654000]
	I0906 16:37:36.068691    1472 provision.go:172] copyRemoteCerts
	I0906 16:37:36.068763    1472 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 16:37:36.068783    1472 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/addons-654000/id_rsa Username:docker}
	I0906 16:37:36.106876    1472 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 16:37:36.113991    1472 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0906 16:37:36.120922    1472 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 16:37:36.128037    1472 provision.go:86] duration metric: configureAuth took 182.899ms
	I0906 16:37:36.128044    1472 buildroot.go:189] setting minikube options for container-runtime
	I0906 16:37:36.128143    1472 config.go:182] Loaded profile config "addons-654000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:37:36.128179    1472 main.go:141] libmachine: Using SSH client type: native
	I0906 16:37:36.128396    1472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c0e3b0] 0x104c10e10 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0906 16:37:36.128400    1472 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 16:37:36.197653    1472 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 16:37:36.197676    1472 buildroot.go:70] root file system type: tmpfs
	I0906 16:37:36.197732    1472 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 16:37:36.197771    1472 main.go:141] libmachine: Using SSH client type: native
	I0906 16:37:36.198014    1472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c0e3b0] 0x104c10e10 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0906 16:37:36.198050    1472 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 16:37:36.272313    1472 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 16:37:36.272359    1472 main.go:141] libmachine: Using SSH client type: native
	I0906 16:37:36.272625    1472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c0e3b0] 0x104c10e10 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0906 16:37:36.272635    1472 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 16:37:36.603663    1472 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 16:37:36.603676    1472 machine.go:91] provisioned docker machine in 843.946292ms
	I0906 16:37:36.603682    1472 client.go:171] LocalClient.Create took 16.051548208s
	I0906 16:37:36.603696    1472 start.go:167] duration metric: libmachine.API.Create for "addons-654000" took 16.051617291s
	I0906 16:37:36.603702    1472 start.go:300] post-start starting for "addons-654000" (driver="qemu2")
	I0906 16:37:36.603707    1472 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 16:37:36.603785    1472 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 16:37:36.603795    1472 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/addons-654000/id_rsa Username:docker}
	I0906 16:37:36.641074    1472 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 16:37:36.642401    1472 info.go:137] Remote host: Buildroot 2021.02.12
	I0906 16:37:36.642412    1472 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17174-979/.minikube/addons for local assets ...
	I0906 16:37:36.642489    1472 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17174-979/.minikube/files for local assets ...
	I0906 16:37:36.642520    1472 start.go:303] post-start completed in 38.815375ms
	I0906 16:37:36.642906    1472 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/addons-654000/config.json ...
	I0906 16:37:36.643063    1472 start.go:128] duration metric: createHost completed in 16.4038755s
	I0906 16:37:36.643089    1472 main.go:141] libmachine: Using SSH client type: native
	I0906 16:37:36.643314    1472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c0e3b0] 0x104c10e10 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0906 16:37:36.643318    1472 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 16:37:36.711459    1472 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694043456.583328585
	
	I0906 16:37:36.711465    1472 fix.go:206] guest clock: 1694043456.583328585
	I0906 16:37:36.711469    1472 fix.go:219] Guest: 2023-09-06 16:37:36.583328585 -0700 PDT Remote: 2023-09-06 16:37:36.643066 -0700 PDT m=+16.504907001 (delta=-59.737415ms)
	I0906 16:37:36.711484    1472 fix.go:190] guest clock delta is within tolerance: -59.737415ms
	I0906 16:37:36.711486    1472 start.go:83] releasing machines lock for "addons-654000", held for 16.47233975s
	I0906 16:37:36.711750    1472 ssh_runner.go:195] Run: cat /version.json
	I0906 16:37:36.711761    1472 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 16:37:36.711759    1472 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/addons-654000/id_rsa Username:docker}
	I0906 16:37:36.711807    1472 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/addons-654000/id_rsa Username:docker}
	I0906 16:37:36.750204    1472 ssh_runner.go:195] Run: systemctl --version
	I0906 16:37:36.791591    1472 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 16:37:36.793448    1472 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 16:37:36.793480    1472 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 16:37:36.799360    1472 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 16:37:36.799368    1472 start.go:466] detecting cgroup driver to use...
	I0906 16:37:36.799513    1472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 16:37:36.805322    1472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0906 16:37:36.808597    1472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 16:37:36.811997    1472 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 16:37:36.812021    1472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 16:37:36.814984    1472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 16:37:36.817916    1472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 16:37:36.821061    1472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 16:37:36.824735    1472 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 16:37:36.828103    1472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 16:37:36.831083    1472 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 16:37:36.833705    1472 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 16:37:36.836694    1472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 16:37:36.904177    1472 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 16:37:36.914733    1472 start.go:466] detecting cgroup driver to use...
	I0906 16:37:36.914802    1472 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 16:37:36.920522    1472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 16:37:36.924763    1472 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 16:37:36.932553    1472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 16:37:36.937548    1472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 16:37:36.942699    1472 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 16:37:36.964886    1472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 16:37:36.969782    1472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 16:37:36.975098    1472 ssh_runner.go:195] Run: which cri-dockerd
	I0906 16:37:36.976300    1472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 16:37:36.978806    1472 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0906 16:37:36.983477    1472 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 16:37:37.047801    1472 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 16:37:37.107532    1472 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 16:37:37.107545    1472 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0906 16:37:37.113004    1472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 16:37:37.175565    1472 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 16:37:38.341536    1472 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.165979541s)
	I0906 16:37:38.341586    1472 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 16:37:38.403577    1472 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0906 16:37:38.462557    1472 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 16:37:38.523486    1472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 16:37:38.583110    1472 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0906 16:37:38.589860    1472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 16:37:38.651727    1472 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0906 16:37:38.674988    1472 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 16:37:38.675073    1472 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 16:37:38.677262    1472 start.go:534] Will wait 60s for crictl version
	I0906 16:37:38.677293    1472 ssh_runner.go:195] Run: which crictl
	I0906 16:37:38.678799    1472 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 16:37:38.693294    1472 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.5
	RuntimeApiVersion:  v1alpha2
	I0906 16:37:38.693365    1472 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 16:37:38.703086    1472 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 16:37:38.718658    1472 out.go:204] * Preparing Kubernetes v1.28.1 on Docker 24.0.5 ...
	I0906 16:37:38.718740    1472 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0906 16:37:38.720207    1472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 16:37:38.724512    1472 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 16:37:38.724553    1472 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 16:37:38.730087    1472 docker.go:636] Got preloaded images: 
	I0906 16:37:38.730096    1472 docker.go:642] registry.k8s.io/kube-apiserver:v1.28.1 wasn't preloaded
	I0906 16:37:38.730134    1472 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0906 16:37:38.733303    1472 ssh_runner.go:195] Run: which lz4
	I0906 16:37:38.734792    1472 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 16:37:38.736113    1472 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 16:37:38.736129    1472 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356902558 bytes)
	I0906 16:37:40.084512    1472 docker.go:600] Took 1.349794 seconds to copy over tarball
	I0906 16:37:40.084568    1472 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 16:37:41.129667    1472 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.045102s)
	I0906 16:37:41.129683    1472 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 16:37:41.145813    1472 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0906 16:37:41.149122    1472 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0906 16:37:41.154480    1472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 16:37:41.219789    1472 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 16:37:43.459591    1472 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.239833166s)
	I0906 16:37:43.459685    1472 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 16:37:43.465676    1472 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0906 16:37:43.465688    1472 cache_images.go:84] Images are preloaded, skipping loading
	I0906 16:37:43.465755    1472 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 16:37:43.473470    1472 cni.go:84] Creating CNI manager for ""
	I0906 16:37:43.473483    1472 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 16:37:43.473503    1472 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 16:37:43.473513    1472 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-654000 NodeName:addons-654000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 16:37:43.473589    1472 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-654000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 16:37:43.473635    1472 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-654000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:addons-654000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0906 16:37:43.473689    1472 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0906 16:37:43.477022    1472 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 16:37:43.477054    1472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 16:37:43.480295    1472 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0906 16:37:43.485418    1472 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 16:37:43.490660    1472 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0906 16:37:43.495689    1472 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0906 16:37:43.497539    1472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 16:37:43.501180    1472 certs.go:56] Setting up /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/addons-654000 for IP: 192.168.105.2
	I0906 16:37:43.501190    1472 certs.go:190] acquiring lock for shared ca certs: {Name:mk43c724e281040fff2ff442572568aeff9573b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:37:43.501341    1472 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/17174-979/.minikube/ca.key
	I0906 16:37:43.555838    1472 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17174-979/.minikube/ca.crt ...
	I0906 16:37:43.555848    1472 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/ca.crt: {Name:mkad470450a3d2528a14c0bae252f4dcde8be925 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:37:43.556087    1472 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17174-979/.minikube/ca.key ...
	I0906 16:37:43.556091    1472 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/ca.key: {Name:mkd5cddb2bdf4c5f4c4f8e99efb24bea1842e658 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:37:43.556208    1472 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/17174-979/.minikube/proxy-client-ca.key
	I0906 16:37:43.953261    1472 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17174-979/.minikube/proxy-client-ca.crt ...
	I0906 16:37:43.953278    1472 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/proxy-client-ca.crt: {Name:mkd62cec7734b701ae56e2bd74e939cff3597078 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:37:43.953621    1472 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17174-979/.minikube/proxy-client-ca.key ...
	I0906 16:37:43.953625    1472 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/proxy-client-ca.key: {Name:mkcffe3cacb508f42cfc4b7923509933346871d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:37:43.953782    1472 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/addons-654000/client.key
	I0906 16:37:43.953790    1472 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/addons-654000/client.crt with IP's: []
	I0906 16:37:44.044511    1472 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/addons-654000/client.crt ...
	I0906 16:37:44.044515    1472 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/addons-654000/client.crt: {Name:mk9b71d570f912dfdbe25a8c78b4fbab472f1dc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:37:44.044700    1472 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/addons-654000/client.key ...
	I0906 16:37:44.044703    1472 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/addons-654000/client.key: {Name:mk1899002f94631c4451b76634f1ba0b0d89885b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:37:44.044805    1472 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/addons-654000/apiserver.key.96055969
	I0906 16:37:44.044816    1472 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/addons-654000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0906 16:37:44.207636    1472 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/addons-654000/apiserver.crt.96055969 ...
	I0906 16:37:44.207641    1472 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/addons-654000/apiserver.crt.96055969: {Name:mkc294bca0c636ca03d1016168b5fed592f1c9b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:37:44.207796    1472 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/addons-654000/apiserver.key.96055969 ...
	I0906 16:37:44.207799    1472 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/addons-654000/apiserver.key.96055969: {Name:mk88fe913993a4f8b2d54f5c53d933c7ed0487fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:37:44.207913    1472 certs.go:337] copying /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/addons-654000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/addons-654000/apiserver.crt
	I0906 16:37:44.208243    1472 certs.go:341] copying /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/addons-654000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/addons-654000/apiserver.key
	I0906 16:37:44.208364    1472 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/addons-654000/proxy-client.key
	I0906 16:37:44.208379    1472 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/addons-654000/proxy-client.crt with IP's: []
	I0906 16:37:44.338828    1472 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/addons-654000/proxy-client.crt ...
	I0906 16:37:44.338837    1472 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/addons-654000/proxy-client.crt: {Name:mk44489ac809b4af28de469aa779931797108506 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:37:44.339060    1472 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/addons-654000/proxy-client.key ...
	I0906 16:37:44.339064    1472 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/addons-654000/proxy-client.key: {Name:mk0cb7195011525016358dbc562bedaab1358ed4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:37:44.339306    1472 certs.go:437] found cert: /Users/jenkins/minikube-integration/17174-979/.minikube/certs/Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca-key.pem (1675 bytes)
	I0906 16:37:44.339331    1472 certs.go:437] found cert: /Users/jenkins/minikube-integration/17174-979/.minikube/certs/Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem (1082 bytes)
	I0906 16:37:44.339352    1472 certs.go:437] found cert: /Users/jenkins/minikube-integration/17174-979/.minikube/certs/Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem (1123 bytes)
	I0906 16:37:44.339371    1472 certs.go:437] found cert: /Users/jenkins/minikube-integration/17174-979/.minikube/certs/Users/jenkins/minikube-integration/17174-979/.minikube/certs/key.pem (1679 bytes)
	I0906 16:37:44.339677    1472 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/addons-654000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0906 16:37:44.347944    1472 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/addons-654000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 16:37:44.355132    1472 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/addons-654000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 16:37:44.361910    1472 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/addons-654000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 16:37:44.369103    1472 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 16:37:44.376522    1472 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 16:37:44.384075    1472 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 16:37:44.391122    1472 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 16:37:44.397704    1472 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 16:37:44.404789    1472 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 16:37:44.410713    1472 ssh_runner.go:195] Run: openssl version
	I0906 16:37:44.412899    1472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 16:37:44.416393    1472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 16:37:44.418038    1472 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:37 /usr/share/ca-certificates/minikubeCA.pem
	I0906 16:37:44.418061    1472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 16:37:44.419882    1472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 16:37:44.422886    1472 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0906 16:37:44.424173    1472 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0906 16:37:44.424211    1472 kubeadm.go:404] StartCluster: {Name:addons-654000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.1 ClusterName:addons-654000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:37:44.424278    1472 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 16:37:44.433843    1472 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 16:37:44.437148    1472 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 16:37:44.440176    1472 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 16:37:44.442761    1472 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 16:37:44.442792    1472 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 16:37:44.467460    1472 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0906 16:37:44.467487    1472 kubeadm.go:322] [preflight] Running pre-flight checks
	I0906 16:37:44.531144    1472 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 16:37:44.531197    1472 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 16:37:44.531244    1472 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 16:37:44.590619    1472 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 16:37:44.598751    1472 out.go:204]   - Generating certificates and keys ...
	I0906 16:37:44.598782    1472 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0906 16:37:44.598818    1472 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0906 16:37:44.711652    1472 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0906 16:37:44.777832    1472 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0906 16:37:44.825394    1472 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0906 16:37:44.979292    1472 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0906 16:37:45.040434    1472 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0906 16:37:45.040492    1472 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-654000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0906 16:37:45.260649    1472 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0906 16:37:45.260709    1472 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-654000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0906 16:37:45.378862    1472 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0906 16:37:45.432711    1472 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0906 16:37:45.530182    1472 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0906 16:37:45.530210    1472 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 16:37:45.603992    1472 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 16:37:45.711301    1472 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 16:37:45.836162    1472 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 16:37:45.905022    1472 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 16:37:45.905265    1472 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 16:37:45.906546    1472 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 16:37:45.910932    1472 out.go:204]   - Booting up control plane ...
	I0906 16:37:45.910991    1472 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 16:37:45.911033    1472 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 16:37:45.911066    1472 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 16:37:45.914156    1472 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 16:37:45.914645    1472 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 16:37:45.914665    1472 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0906 16:37:45.983490    1472 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 16:37:49.987190    1472 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.003939 seconds
	I0906 16:37:49.987245    1472 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 16:37:49.996251    1472 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 16:37:50.505377    1472 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 16:37:50.505536    1472 kubeadm.go:322] [mark-control-plane] Marking the node addons-654000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 16:37:51.010838    1472 kubeadm.go:322] [bootstrap-token] Using token: zfrz2h.xc65grv9r60mvtus
	I0906 16:37:51.019668    1472 out.go:204]   - Configuring RBAC rules ...
	I0906 16:37:51.019752    1472 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 16:37:51.019799    1472 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 16:37:51.021687    1472 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 16:37:51.023190    1472 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 16:37:51.024351    1472 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 16:37:51.025598    1472 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 16:37:51.030191    1472 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 16:37:51.185143    1472 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0906 16:37:51.421387    1472 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0906 16:37:51.421398    1472 kubeadm.go:322] 
	I0906 16:37:51.421425    1472 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0906 16:37:51.421432    1472 kubeadm.go:322] 
	I0906 16:37:51.421473    1472 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0906 16:37:51.421477    1472 kubeadm.go:322] 
	I0906 16:37:51.421489    1472 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0906 16:37:51.421522    1472 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 16:37:51.421551    1472 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 16:37:51.421555    1472 kubeadm.go:322] 
	I0906 16:37:51.421589    1472 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0906 16:37:51.421593    1472 kubeadm.go:322] 
	I0906 16:37:51.421617    1472 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 16:37:51.421641    1472 kubeadm.go:322] 
	I0906 16:37:51.421672    1472 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0906 16:37:51.421711    1472 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 16:37:51.421744    1472 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 16:37:51.421746    1472 kubeadm.go:322] 
	I0906 16:37:51.421790    1472 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 16:37:51.421832    1472 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0906 16:37:51.421835    1472 kubeadm.go:322] 
	I0906 16:37:51.421886    1472 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token zfrz2h.xc65grv9r60mvtus \
	I0906 16:37:51.421942    1472 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5095446c3b17214aaa1a807af40fe852c4809cf7574bda1580a6e046d3ea63e1 \
	I0906 16:37:51.421954    1472 kubeadm.go:322] 	--control-plane 
	I0906 16:37:51.421959    1472 kubeadm.go:322] 
	I0906 16:37:51.422000    1472 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0906 16:37:51.422006    1472 kubeadm.go:322] 
	I0906 16:37:51.422041    1472 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token zfrz2h.xc65grv9r60mvtus \
	I0906 16:37:51.422089    1472 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5095446c3b17214aaa1a807af40fe852c4809cf7574bda1580a6e046d3ea63e1 
	I0906 16:37:51.422141    1472 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 16:37:51.422150    1472 cni.go:84] Creating CNI manager for ""
	I0906 16:37:51.422158    1472 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 16:37:51.430612    1472 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 16:37:51.434663    1472 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 16:37:51.437679    1472 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0906 16:37:51.443804    1472 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 16:37:51.443871    1472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:37:51.443891    1472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2 minikube.k8s.io/name=addons-654000 minikube.k8s.io/updated_at=2023_09_06T16_37_51_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:37:51.449516    1472 ops.go:34] apiserver oom_adj: -16
	I0906 16:37:51.491990    1472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:37:51.531268    1472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:37:52.066801    1472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:37:52.566716    1472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:37:53.066721    1472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:37:53.565074    1472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:37:54.066743    1472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:37:54.566725    1472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:37:55.066691    1472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:37:55.566692    1472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:37:56.066690    1472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:37:56.566624    1472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:37:57.066638    1472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:37:57.566598    1472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:37:58.066650    1472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:37:58.565600    1472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:37:59.066609    1472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:37:59.566617    1472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:38:00.066614    1472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:38:00.566576    1472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:38:01.066574    1472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:38:01.566522    1472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:38:02.066568    1472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:38:02.566520    1472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:38:03.066580    1472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:38:03.566502    1472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:38:04.066536    1472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:38:04.566469    1472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:38:04.601904    1472 kubeadm.go:1081] duration metric: took 13.158348084s to wait for elevateKubeSystemPrivileges.
	I0906 16:38:04.601917    1472 kubeadm.go:406] StartCluster complete in 20.178120917s
	I0906 16:38:04.601926    1472 settings.go:142] acquiring lock: {Name:mke09ef7a1e2d249f8e4127472ec9f16828a9cb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:38:04.602115    1472 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:38:04.602309    1472 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/kubeconfig: {Name:mk4d1ce1d23510730a8780064cdf633efa514467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:38:04.602542    1472 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 16:38:04.602598    1472 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0906 16:38:04.602658    1472 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-654000"
	I0906 16:38:04.602668    1472 addons.go:69] Setting volumesnapshots=true in profile "addons-654000"
	I0906 16:38:04.602669    1472 addons.go:69] Setting cloud-spanner=true in profile "addons-654000"
	I0906 16:38:04.602674    1472 addons.go:231] Setting addon volumesnapshots=true in "addons-654000"
	I0906 16:38:04.602677    1472 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-654000"
	I0906 16:38:04.602700    1472 addons.go:69] Setting storage-provisioner=true in profile "addons-654000"
	I0906 16:38:04.602707    1472 addons.go:231] Setting addon storage-provisioner=true in "addons-654000"
	I0906 16:38:04.602721    1472 host.go:66] Checking if "addons-654000" exists ...
	I0906 16:38:04.602733    1472 host.go:66] Checking if "addons-654000" exists ...
	I0906 16:38:04.602744    1472 host.go:66] Checking if "addons-654000" exists ...
	I0906 16:38:04.602679    1472 addons.go:231] Setting addon cloud-spanner=true in "addons-654000"
	I0906 16:38:04.602760    1472 addons.go:69] Setting ingress-dns=true in profile "addons-654000"
	I0906 16:38:04.602829    1472 addons.go:231] Setting addon ingress-dns=true in "addons-654000"
	I0906 16:38:04.602684    1472 addons.go:69] Setting gcp-auth=true in profile "addons-654000"
	I0906 16:38:04.602864    1472 mustload.go:65] Loading cluster: addons-654000
	I0906 16:38:04.602900    1472 host.go:66] Checking if "addons-654000" exists ...
	I0906 16:38:04.602945    1472 host.go:66] Checking if "addons-654000" exists ...
	I0906 16:38:04.602956    1472 config.go:182] Loaded profile config "addons-654000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:38:04.602691    1472 addons.go:69] Setting default-storageclass=true in profile "addons-654000"
	I0906 16:38:04.603032    1472 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-654000"
	I0906 16:38:04.602692    1472 config.go:182] Loaded profile config "addons-654000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	W0906 16:38:04.603170    1472 host.go:54] host status for "addons-654000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17174-979/.minikube/machines/addons-654000/monitor: connect: connection refused
	W0906 16:38:04.603177    1472 addons_storage_classes.go:55] "addons-654000" is not running, writing default-storageclass=true to disk and skipping enablement
	I0906 16:38:04.602695    1472 addons.go:69] Setting metrics-server=true in profile "addons-654000"
	I0906 16:38:04.603193    1472 addons.go:231] Setting addon metrics-server=true in "addons-654000"
	W0906 16:38:04.603259    1472 host.go:54] host status for "addons-654000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17174-979/.minikube/machines/addons-654000/monitor: connect: connection refused
	W0906 16:38:04.603266    1472 addons.go:277] "addons-654000" is not running, setting volumesnapshots=true and skipping enablement (err=<nil>)
	I0906 16:38:04.602697    1472 addons.go:69] Setting inspektor-gadget=true in profile "addons-654000"
	I0906 16:38:04.603285    1472 addons.go:231] Setting addon inspektor-gadget=true in "addons-654000"
	I0906 16:38:04.603294    1472 host.go:66] Checking if "addons-654000" exists ...
	I0906 16:38:04.602697    1472 addons.go:69] Setting registry=true in profile "addons-654000"
	I0906 16:38:04.603313    1472 host.go:66] Checking if "addons-654000" exists ...
	I0906 16:38:04.603309    1472 addons.go:231] Setting addon registry=true in "addons-654000"
	I0906 16:38:04.603347    1472 host.go:66] Checking if "addons-654000" exists ...
	I0906 16:38:04.602755    1472 addons.go:69] Setting ingress=true in profile "addons-654000"
	I0906 16:38:04.603385    1472 addons.go:231] Setting addon ingress=true in "addons-654000"
	W0906 16:38:04.603453    1472 host.go:54] host status for "addons-654000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17174-979/.minikube/machines/addons-654000/monitor: connect: connection refused
	W0906 16:38:04.603459    1472 addons.go:277] "addons-654000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	I0906 16:38:04.603468    1472 host.go:66] Checking if "addons-654000" exists ...
	I0906 16:38:04.603179    1472 addons.go:231] Setting addon default-storageclass=true in "addons-654000"
	I0906 16:38:04.603484    1472 host.go:66] Checking if "addons-654000" exists ...
	W0906 16:38:04.603502    1472 host.go:54] host status for "addons-654000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17174-979/.minikube/machines/addons-654000/monitor: connect: connection refused
	W0906 16:38:04.603507    1472 addons.go:277] "addons-654000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	I0906 16:38:04.603509    1472 addons.go:467] Verifying addon metrics-server=true in "addons-654000"
	I0906 16:38:04.607638    1472 out.go:177] 
	W0906 16:38:04.603605    1472 host.go:54] host status for "addons-654000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17174-979/.minikube/machines/addons-654000/monitor: connect: connection refused
	W0906 16:38:04.603216    1472 host.go:54] host status for "addons-654000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17174-979/.minikube/machines/addons-654000/monitor: connect: connection refused
	W0906 16:38:04.603715    1472 host.go:54] host status for "addons-654000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17174-979/.minikube/machines/addons-654000/monitor: connect: connection refused
	W0906 16:38:04.603719    1472 host.go:54] host status for "addons-654000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17174-979/.minikube/machines/addons-654000/monitor: connect: connection refused
	W0906 16:38:04.603787    1472 host.go:54] host status for "addons-654000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17174-979/.minikube/machines/addons-654000/monitor: connect: connection refused
	W0906 16:38:04.610538    1472 addons.go:277] "addons-654000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	W0906 16:38:04.610554    1472 addons.go:277] "addons-654000" is not running, setting default-storageclass=true and skipping enablement (err=<nil>)
	W0906 16:38:04.610561    1472 out.go:239] X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17174-979/.minikube/machines/addons-654000/monitor: connect: connection refused
	W0906 16:38:04.610561    1472 addons.go:277] "addons-654000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	W0906 16:38:04.610565    1472 addons.go:277] "addons-654000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	W0906 16:38:04.610581    1472 addons.go:277] "addons-654000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I0906 16:38:04.613640    1472 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17174-979/.minikube/machines/addons-654000/monitor: connect: connection refused
	W0906 16:38:04.616622    1472 out.go:239] * 
	* 
	I0906 16:38:04.616629    1472 addons.go:467] Verifying addon ingress=true in "addons-654000"
	I0906 16:38:04.616633    1472 addons.go:467] Verifying addon registry=true in "addons-654000"
	I0906 16:38:04.616632    1472 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0906 16:38:04.620994    1472 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 16:38:04.627613    1472 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 16:38:04.622565    1472 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-654000" context rescaled to 1 replicas
	I0906 16:38:04.638550    1472 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0906 16:38:04.641363    1472 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0906 16:38:04.642596    1472 out.go:177] 
	I0906 16:38:04.642609    1472 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:38:04.649559    1472 out.go:177] * Verifying Kubernetes components...
	I0906 16:38:04.645600    1472 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 16:38:04.645602    1472 out.go:177] * Verifying registry addon...
	I0906 16:38:04.645603    1472 out.go:177] * Verifying ingress addon...
	I0906 16:38:04.657568    1472 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)

** /stderr **
addons_test.go:90: out/minikube-darwin-arm64 start -p addons-654000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (44.54s)

TestCertOptions (10.1s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-728000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-728000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.823942291s)

-- stdout --
	* [cert-options-728000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-options-728000 in cluster cert-options-728000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-728000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-728000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-728000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-728000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-728000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 89 (78.026625ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-728000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-728000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 89
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-728000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-728000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-728000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 89 (39.705708ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-728000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-728000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 89
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-728000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2023-09-06 16:50:02.920211 -0700 PDT m=+794.193837126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-728000 -n cert-options-728000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-728000 -n cert-options-728000: exit status 7 (29.212167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-728000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-728000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-728000
--- FAIL: TestCertOptions (10.10s)

TestCertExpiration (195.25s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-033000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-033000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.84476475s)

-- stdout --
	* [cert-expiration-033000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-expiration-033000 in cluster cert-expiration-033000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-033000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-033000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-033000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-033000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-033000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.228516459s)

-- stdout --
	* [cert-expiration-033000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-033000 in cluster cert-expiration-033000
	* Restarting existing qemu2 VM for "cert-expiration-033000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-033000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-033000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-033000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-033000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-033000 in cluster cert-expiration-033000
	* Restarting existing qemu2 VM for "cert-expiration-033000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-033000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-033000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2023-09-06 16:53:02.896565 -0700 PDT m=+974.255141209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-033000 -n cert-expiration-033000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-033000 -n cert-expiration-033000: exit status 7 (74.282208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-033000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-033000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-033000
--- FAIL: TestCertExpiration (195.25s)

TestDockerFlags (9.97s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-152000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-152000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.721622291s)

-- stdout --
	* [docker-flags-152000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node docker-flags-152000 in cluster docker-flags-152000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-152000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 16:49:43.001737    3040 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:49:43.001858    3040 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:49:43.001864    3040 out.go:309] Setting ErrFile to fd 2...
	I0906 16:49:43.001867    3040 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:49:43.001971    3040 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:49:43.002956    3040 out.go:303] Setting JSON to false
	I0906 16:49:43.018146    3040 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1157,"bootTime":1694043026,"procs":407,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0906 16:49:43.018213    3040 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 16:49:43.023259    3040 out.go:177] * [docker-flags-152000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 16:49:43.031301    3040 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 16:49:43.035273    3040 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:49:43.031365    3040 notify.go:220] Checking for updates...
	I0906 16:49:43.038323    3040 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 16:49:43.041308    3040 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 16:49:43.044273    3040 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	I0906 16:49:43.047291    3040 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 16:49:43.050655    3040 config.go:182] Loaded profile config "force-systemd-flag-819000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:49:43.050718    3040 config.go:182] Loaded profile config "multinode-994000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:49:43.050754    3040 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 16:49:43.055216    3040 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 16:49:43.062239    3040 start.go:298] selected driver: qemu2
	I0906 16:49:43.062246    3040 start.go:902] validating driver "qemu2" against <nil>
	I0906 16:49:43.062252    3040 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 16:49:43.064235    3040 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 16:49:43.067218    3040 out.go:177] * Automatically selected the socket_vmnet network
	I0906 16:49:43.070421    3040 start_flags.go:917] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0906 16:49:43.070440    3040 cni.go:84] Creating CNI manager for ""
	I0906 16:49:43.070447    3040 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 16:49:43.070451    3040 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 16:49:43.070457    3040 start_flags.go:321] config:
	{Name:docker-flags-152000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:docker-flags-152000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/ru
n/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:49:43.074738    3040 iso.go:125] acquiring lock: {Name:mk208580a41bd83b6238142fbb7ad0497ef8967b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:49:43.082268    3040 out.go:177] * Starting control plane node docker-flags-152000 in cluster docker-flags-152000
	I0906 16:49:43.086301    3040 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 16:49:43.086328    3040 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 16:49:43.086341    3040 cache.go:57] Caching tarball of preloaded images
	I0906 16:49:43.086413    3040 preload.go:174] Found /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 16:49:43.086419    3040 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 16:49:43.086487    3040 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/docker-flags-152000/config.json ...
	I0906 16:49:43.086502    3040 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/docker-flags-152000/config.json: {Name:mkd94aad86816a01251505ce83ab7fd8a36a9f98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:49:43.086730    3040 start.go:365] acquiring machines lock for docker-flags-152000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:49:43.086760    3040 start.go:369] acquired machines lock for "docker-flags-152000" in 24.75µs
	I0906 16:49:43.086771    3040 start.go:93] Provisioning new machine with config: &{Name:docker-flags-152000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root
SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:docker-flags-152000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:49:43.086803    3040 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:49:43.094290    3040 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0906 16:49:43.110132    3040 start.go:159] libmachine.API.Create for "docker-flags-152000" (driver="qemu2")
	I0906 16:49:43.110160    3040 client.go:168] LocalClient.Create starting
	I0906 16:49:43.110225    3040 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:49:43.110248    3040 main.go:141] libmachine: Decoding PEM data...
	I0906 16:49:43.110257    3040 main.go:141] libmachine: Parsing certificate...
	I0906 16:49:43.110300    3040 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:49:43.110321    3040 main.go:141] libmachine: Decoding PEM data...
	I0906 16:49:43.110329    3040 main.go:141] libmachine: Parsing certificate...
	I0906 16:49:43.110630    3040 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:49:43.224786    3040 main.go:141] libmachine: Creating SSH key...
	I0906 16:49:43.297109    3040 main.go:141] libmachine: Creating Disk image...
	I0906 16:49:43.297114    3040 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:49:43.297247    3040 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/docker-flags-152000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/docker-flags-152000/disk.qcow2
	I0906 16:49:43.305710    3040 main.go:141] libmachine: STDOUT: 
	I0906 16:49:43.305723    3040 main.go:141] libmachine: STDERR: 
	I0906 16:49:43.305788    3040 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/docker-flags-152000/disk.qcow2 +20000M
	I0906 16:49:43.312945    3040 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:49:43.312961    3040 main.go:141] libmachine: STDERR: 
	I0906 16:49:43.312978    3040 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/docker-flags-152000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/docker-flags-152000/disk.qcow2
	I0906 16:49:43.312985    3040 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:49:43.313031    3040 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/docker-flags-152000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/docker-flags-152000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/docker-flags-152000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:9f:70:0d:c2:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/docker-flags-152000/disk.qcow2
	I0906 16:49:43.314563    3040 main.go:141] libmachine: STDOUT: 
	I0906 16:49:43.314575    3040 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:49:43.314595    3040 client.go:171] LocalClient.Create took 204.433833ms
	I0906 16:49:45.316730    3040 start.go:128] duration metric: createHost completed in 2.229955042s
	I0906 16:49:45.317024    3040 start.go:83] releasing machines lock for "docker-flags-152000", held for 2.230299125s
	W0906 16:49:45.317092    3040 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:49:45.330184    3040 out.go:177] * Deleting "docker-flags-152000" in qemu2 ...
	W0906 16:49:45.347300    3040 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:49:45.347330    3040 start.go:687] Will try again in 5 seconds ...
	I0906 16:49:50.349420    3040 start.go:365] acquiring machines lock for docker-flags-152000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:49:50.349757    3040 start.go:369] acquired machines lock for "docker-flags-152000" in 272.667µs
	I0906 16:49:50.349872    3040 start.go:93] Provisioning new machine with config: &{Name:docker-flags-152000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root
SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:docker-flags-152000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:49:50.350166    3040 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:49:50.359794    3040 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0906 16:49:50.405238    3040 start.go:159] libmachine.API.Create for "docker-flags-152000" (driver="qemu2")
	I0906 16:49:50.405284    3040 client.go:168] LocalClient.Create starting
	I0906 16:49:50.405414    3040 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:49:50.405484    3040 main.go:141] libmachine: Decoding PEM data...
	I0906 16:49:50.405504    3040 main.go:141] libmachine: Parsing certificate...
	I0906 16:49:50.405582    3040 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:49:50.405628    3040 main.go:141] libmachine: Decoding PEM data...
	I0906 16:49:50.405640    3040 main.go:141] libmachine: Parsing certificate...
	I0906 16:49:50.406153    3040 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:49:50.537172    3040 main.go:141] libmachine: Creating SSH key...
	I0906 16:49:50.635041    3040 main.go:141] libmachine: Creating Disk image...
	I0906 16:49:50.635053    3040 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:49:50.635178    3040 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/docker-flags-152000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/docker-flags-152000/disk.qcow2
	I0906 16:49:50.643544    3040 main.go:141] libmachine: STDOUT: 
	I0906 16:49:50.643560    3040 main.go:141] libmachine: STDERR: 
	I0906 16:49:50.643620    3040 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/docker-flags-152000/disk.qcow2 +20000M
	I0906 16:49:50.650755    3040 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:49:50.650769    3040 main.go:141] libmachine: STDERR: 
	I0906 16:49:50.650780    3040 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/docker-flags-152000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/docker-flags-152000/disk.qcow2
	I0906 16:49:50.650786    3040 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:49:50.650828    3040 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/docker-flags-152000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/docker-flags-152000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/docker-flags-152000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:60:16:34:93:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/docker-flags-152000/disk.qcow2
	I0906 16:49:50.652384    3040 main.go:141] libmachine: STDOUT: 
	I0906 16:49:50.652398    3040 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:49:50.652409    3040 client.go:171] LocalClient.Create took 247.12325ms
	I0906 16:49:52.654510    3040 start.go:128] duration metric: createHost completed in 2.304368125s
	I0906 16:49:52.654608    3040 start.go:83] releasing machines lock for "docker-flags-152000", held for 2.304843334s
	W0906 16:49:52.655042    3040 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-152000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-152000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:49:52.664701    3040 out.go:177] 
	W0906 16:49:52.668876    3040 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 16:49:52.668911    3040 out.go:239] * 
	* 
	W0906 16:49:52.671662    3040 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 16:49:52.681885    3040 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-152000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-152000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-152000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 89 (76.638542ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-152000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-152000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 89
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-152000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-152000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-152000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-152000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 89 (43.699708ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-152000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-152000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 89
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-152000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-152000\"\n"
panic.go:522: *** TestDockerFlags FAILED at 2023-09-06 16:49:52.818506 -0700 PDT m=+784.091925876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-152000 -n docker-flags-152000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-152000 -n docker-flags-152000: exit status 7 (28.672209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-152000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-152000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-152000
--- FAIL: TestDockerFlags (9.97s)

TestForceSystemdFlag (11.69s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-819000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-819000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.477457792s)

-- stdout --
	* [force-systemd-flag-819000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-flag-819000 in cluster force-systemd-flag-819000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-819000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 16:49:36.248981    3015 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:49:36.249119    3015 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:49:36.249122    3015 out.go:309] Setting ErrFile to fd 2...
	I0906 16:49:36.249124    3015 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:49:36.249242    3015 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:49:36.250319    3015 out.go:303] Setting JSON to false
	I0906 16:49:36.265869    3015 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1150,"bootTime":1694043026,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0906 16:49:36.265933    3015 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 16:49:36.271313    3015 out.go:177] * [force-systemd-flag-819000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 16:49:36.278306    3015 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 16:49:36.278314    3015 notify.go:220] Checking for updates...
	I0906 16:49:36.281320    3015 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:49:36.284316    3015 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 16:49:36.288167    3015 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 16:49:36.291262    3015 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	I0906 16:49:36.294341    3015 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 16:49:36.297512    3015 config.go:182] Loaded profile config "force-systemd-env-891000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:49:36.297584    3015 config.go:182] Loaded profile config "multinode-994000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:49:36.297623    3015 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 16:49:36.302291    3015 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 16:49:36.309163    3015 start.go:298] selected driver: qemu2
	I0906 16:49:36.309169    3015 start.go:902] validating driver "qemu2" against <nil>
	I0906 16:49:36.309177    3015 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 16:49:36.310994    3015 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 16:49:36.314179    3015 out.go:177] * Automatically selected the socket_vmnet network
	I0906 16:49:36.317359    3015 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0906 16:49:36.317391    3015 cni.go:84] Creating CNI manager for ""
	I0906 16:49:36.317399    3015 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 16:49:36.317405    3015 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 16:49:36.317410    3015 start_flags.go:321] config:
	{Name:force-systemd-flag-819000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-flag-819000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:49:36.321581    3015 iso.go:125] acquiring lock: {Name:mk208580a41bd83b6238142fbb7ad0497ef8967b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:49:36.328190    3015 out.go:177] * Starting control plane node force-systemd-flag-819000 in cluster force-systemd-flag-819000
	I0906 16:49:36.332235    3015 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 16:49:36.332251    3015 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 16:49:36.332265    3015 cache.go:57] Caching tarball of preloaded images
	I0906 16:49:36.332314    3015 preload.go:174] Found /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 16:49:36.332320    3015 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 16:49:36.332368    3015 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/force-systemd-flag-819000/config.json ...
	I0906 16:49:36.332381    3015 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/force-systemd-flag-819000/config.json: {Name:mkec4efb345f857413e03bfba97cf61ccbae1b17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:49:36.332595    3015 start.go:365] acquiring machines lock for force-systemd-flag-819000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:49:36.332627    3015 start.go:369] acquired machines lock for "force-systemd-flag-819000" in 24.375µs
	I0906 16:49:36.332639    3015 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-819000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-flag-819000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:49:36.332666    3015 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:49:36.337224    3015 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0906 16:49:36.352464    3015 start.go:159] libmachine.API.Create for "force-systemd-flag-819000" (driver="qemu2")
	I0906 16:49:36.352484    3015 client.go:168] LocalClient.Create starting
	I0906 16:49:36.352566    3015 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:49:36.352596    3015 main.go:141] libmachine: Decoding PEM data...
	I0906 16:49:36.352609    3015 main.go:141] libmachine: Parsing certificate...
	I0906 16:49:36.352650    3015 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:49:36.352667    3015 main.go:141] libmachine: Decoding PEM data...
	I0906 16:49:36.352677    3015 main.go:141] libmachine: Parsing certificate...
	I0906 16:49:36.353043    3015 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:49:36.467484    3015 main.go:141] libmachine: Creating SSH key...
	I0906 16:49:36.601557    3015 main.go:141] libmachine: Creating Disk image...
	I0906 16:49:36.601564    3015 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:49:36.601713    3015 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/force-systemd-flag-819000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/force-systemd-flag-819000/disk.qcow2
	I0906 16:49:36.610712    3015 main.go:141] libmachine: STDOUT: 
	I0906 16:49:36.610724    3015 main.go:141] libmachine: STDERR: 
	I0906 16:49:36.610771    3015 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/force-systemd-flag-819000/disk.qcow2 +20000M
	I0906 16:49:36.617869    3015 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:49:36.617881    3015 main.go:141] libmachine: STDERR: 
	I0906 16:49:36.617900    3015 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/force-systemd-flag-819000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/force-systemd-flag-819000/disk.qcow2
	I0906 16:49:36.617907    3015 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:49:36.617943    3015 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/force-systemd-flag-819000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/force-systemd-flag-819000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/force-systemd-flag-819000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:31:79:d6:71:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/force-systemd-flag-819000/disk.qcow2
	I0906 16:49:36.619470    3015 main.go:141] libmachine: STDOUT: 
	I0906 16:49:36.619481    3015 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:49:36.619499    3015 client.go:171] LocalClient.Create took 267.014542ms
	I0906 16:49:38.621658    3015 start.go:128] duration metric: createHost completed in 2.289008875s
	I0906 16:49:38.621734    3015 start.go:83] releasing machines lock for "force-systemd-flag-819000", held for 2.289143875s
	W0906 16:49:38.622062    3015 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:49:38.632377    3015 out.go:177] * Deleting "force-systemd-flag-819000" in qemu2 ...
	W0906 16:49:38.654013    3015 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:49:38.654042    3015 start.go:687] Will try again in 5 seconds ...
	I0906 16:49:43.656175    3015 start.go:365] acquiring machines lock for force-systemd-flag-819000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:49:45.317167    3015 start.go:369] acquired machines lock for "force-systemd-flag-819000" in 1.660880875s
	I0906 16:49:45.317308    3015 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-819000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-flag-819000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:49:45.317617    3015 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:49:45.324277    3015 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0906 16:49:45.370516    3015 start.go:159] libmachine.API.Create for "force-systemd-flag-819000" (driver="qemu2")
	I0906 16:49:45.370554    3015 client.go:168] LocalClient.Create starting
	I0906 16:49:45.370689    3015 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:49:45.370742    3015 main.go:141] libmachine: Decoding PEM data...
	I0906 16:49:45.370772    3015 main.go:141] libmachine: Parsing certificate...
	I0906 16:49:45.370844    3015 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:49:45.370884    3015 main.go:141] libmachine: Decoding PEM data...
	I0906 16:49:45.370898    3015 main.go:141] libmachine: Parsing certificate...
	I0906 16:49:45.371520    3015 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:49:45.500596    3015 main.go:141] libmachine: Creating SSH key...
	I0906 16:49:45.633687    3015 main.go:141] libmachine: Creating Disk image...
	I0906 16:49:45.633693    3015 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:49:45.633839    3015 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/force-systemd-flag-819000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/force-systemd-flag-819000/disk.qcow2
	I0906 16:49:45.642805    3015 main.go:141] libmachine: STDOUT: 
	I0906 16:49:45.642817    3015 main.go:141] libmachine: STDERR: 
	I0906 16:49:45.642892    3015 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/force-systemd-flag-819000/disk.qcow2 +20000M
	I0906 16:49:45.650028    3015 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:49:45.650040    3015 main.go:141] libmachine: STDERR: 
	I0906 16:49:45.650055    3015 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/force-systemd-flag-819000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/force-systemd-flag-819000/disk.qcow2
	I0906 16:49:45.650062    3015 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:49:45.650106    3015 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/force-systemd-flag-819000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/force-systemd-flag-819000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/force-systemd-flag-819000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:4b:b4:10:03:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/force-systemd-flag-819000/disk.qcow2
	I0906 16:49:45.651620    3015 main.go:141] libmachine: STDOUT: 
	I0906 16:49:45.651632    3015 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:49:45.651646    3015 client.go:171] LocalClient.Create took 281.090583ms
	I0906 16:49:47.653777    3015 start.go:128] duration metric: createHost completed in 2.336179667s
	I0906 16:49:47.653868    3015 start.go:83] releasing machines lock for "force-systemd-flag-819000", held for 2.336699s
	W0906 16:49:47.654388    3015 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-819000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-819000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:49:47.665487    3015 out.go:177] 
	W0906 16:49:47.670324    3015 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 16:49:47.670352    3015 out.go:239] * 
	* 
	W0906 16:49:47.672923    3015 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 16:49:47.682202    3015 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-819000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-819000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-819000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (77.699ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-flag-819000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-819000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2023-09-06 16:49:47.776301 -0700 PDT m=+779.049617709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-819000 -n force-systemd-flag-819000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-819000 -n force-systemd-flag-819000: exit status 7 (33.587041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-819000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-819000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-819000
--- FAIL: TestForceSystemdFlag (11.69s)
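Both VM creation attempts above die at the same step: the QEMU command is wrapped in `socket_vmnet_client /var/run/socket_vmnet …`, and the connect to that unix socket is refused, meaning the socket path exists (or is created) but no socket_vmnet daemon is accepting on it. A minimal sketch of that failure mode in plain Python (not minikube code; the path here is a throwaway stand-in for `/var/run/socket_vmnet`):

```python
import errno
import os
import socket
import tempfile

# Stand-in for /var/run/socket_vmnet in a temp dir.
path = os.path.join(tempfile.mkdtemp(), "socket_vmnet")

# bind() creates the socket file, but we never call listen() --
# analogous to a socket_vmnet daemon that is absent or wedged.
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(path)

# A client connect() now fails with ECONNREFUSED, the same error
# socket_vmnet_client reports in the log above.
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    client.connect(path)
except OSError as e:
    assert e.errno == errno.ECONNREFUSED
    print("ECONNREFUSED")
finally:
    client.close()
    server.close()
```

On the CI host this usually points at the socket_vmnet service not running (or its socket path being stale) rather than anything in the test itself, which is consistent with every qemu2 test in this run failing identically.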

TestForceSystemdEnv (10.13s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv


=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-891000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-891000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.919389083s)

-- stdout --
	* [force-systemd-env-891000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-env-891000 in cluster force-systemd-env-891000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-891000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 16:49:32.871371    2995 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:49:32.871523    2995 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:49:32.871526    2995 out.go:309] Setting ErrFile to fd 2...
	I0906 16:49:32.871528    2995 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:49:32.871651    2995 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:49:32.872787    2995 out.go:303] Setting JSON to false
	I0906 16:49:32.889656    2995 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1146,"bootTime":1694043026,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0906 16:49:32.889724    2995 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 16:49:32.895120    2995 out.go:177] * [force-systemd-env-891000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 16:49:32.899212    2995 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 16:49:32.903141    2995 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:49:32.899250    2995 notify.go:220] Checking for updates...
	I0906 16:49:32.908579    2995 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 16:49:32.912129    2995 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 16:49:32.915137    2995 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	I0906 16:49:32.918128    2995 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0906 16:49:32.921446    2995 config.go:182] Loaded profile config "multinode-994000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:49:32.921495    2995 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 16:49:32.925146    2995 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 16:49:32.932095    2995 start.go:298] selected driver: qemu2
	I0906 16:49:32.932107    2995 start.go:902] validating driver "qemu2" against <nil>
	I0906 16:49:32.932115    2995 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 16:49:32.934427    2995 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 16:49:32.937095    2995 out.go:177] * Automatically selected the socket_vmnet network
	I0906 16:49:32.941168    2995 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0906 16:49:32.941203    2995 cni.go:84] Creating CNI manager for ""
	I0906 16:49:32.941211    2995 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 16:49:32.941217    2995 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 16:49:32.941221    2995 start_flags.go:321] config:
	{Name:force-systemd-env-891000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-env-891000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:49:32.946426    2995 iso.go:125] acquiring lock: {Name:mk208580a41bd83b6238142fbb7ad0497ef8967b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:49:32.955076    2995 out.go:177] * Starting control plane node force-systemd-env-891000 in cluster force-systemd-env-891000
	I0906 16:49:32.959106    2995 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 16:49:32.959125    2995 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 16:49:32.959137    2995 cache.go:57] Caching tarball of preloaded images
	I0906 16:49:32.959190    2995 preload.go:174] Found /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 16:49:32.959195    2995 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 16:49:32.959272    2995 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/force-systemd-env-891000/config.json ...
	I0906 16:49:32.959284    2995 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/force-systemd-env-891000/config.json: {Name:mk8507176db59e820c988b787201714f2b7312d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:49:32.959497    2995 start.go:365] acquiring machines lock for force-systemd-env-891000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:49:32.959527    2995 start.go:369] acquired machines lock for "force-systemd-env-891000" in 21.375µs
	I0906 16:49:32.959537    2995 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-891000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-env-891000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:49:32.959565    2995 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:49:32.965113    2995 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0906 16:49:32.978971    2995 start.go:159] libmachine.API.Create for "force-systemd-env-891000" (driver="qemu2")
	I0906 16:49:32.979002    2995 client.go:168] LocalClient.Create starting
	I0906 16:49:32.979060    2995 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:49:32.979092    2995 main.go:141] libmachine: Decoding PEM data...
	I0906 16:49:32.979104    2995 main.go:141] libmachine: Parsing certificate...
	I0906 16:49:32.979147    2995 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:49:32.979164    2995 main.go:141] libmachine: Decoding PEM data...
	I0906 16:49:32.979174    2995 main.go:141] libmachine: Parsing certificate...
	I0906 16:49:32.979499    2995 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:49:33.100950    2995 main.go:141] libmachine: Creating SSH key...
	I0906 16:49:33.314755    2995 main.go:141] libmachine: Creating Disk image...
	I0906 16:49:33.314771    2995 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:49:33.314923    2995 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/force-systemd-env-891000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/force-systemd-env-891000/disk.qcow2
	I0906 16:49:33.323588    2995 main.go:141] libmachine: STDOUT: 
	I0906 16:49:33.323605    2995 main.go:141] libmachine: STDERR: 
	I0906 16:49:33.323659    2995 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/force-systemd-env-891000/disk.qcow2 +20000M
	I0906 16:49:33.331130    2995 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:49:33.331144    2995 main.go:141] libmachine: STDERR: 
	I0906 16:49:33.331167    2995 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/force-systemd-env-891000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/force-systemd-env-891000/disk.qcow2
	I0906 16:49:33.331174    2995 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:49:33.331206    2995 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/force-systemd-env-891000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/force-systemd-env-891000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/force-systemd-env-891000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:e9:d6:b9:a6:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/force-systemd-env-891000/disk.qcow2
	I0906 16:49:33.332732    2995 main.go:141] libmachine: STDOUT: 
	I0906 16:49:33.332746    2995 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:49:33.332765    2995 client.go:171] LocalClient.Create took 353.76625ms
	I0906 16:49:35.334880    2995 start.go:128] duration metric: createHost completed in 2.375336542s
	I0906 16:49:35.334935    2995 start.go:83] releasing machines lock for "force-systemd-env-891000", held for 2.375447167s
	W0906 16:49:35.335014    2995 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:49:35.346444    2995 out.go:177] * Deleting "force-systemd-env-891000" in qemu2 ...
	W0906 16:49:35.366415    2995 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:49:35.366445    2995 start.go:687] Will try again in 5 seconds ...
	I0906 16:49:40.367254    2995 start.go:365] acquiring machines lock for force-systemd-env-891000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:49:40.367776    2995 start.go:369] acquired machines lock for "force-systemd-env-891000" in 394.541µs
	I0906 16:49:40.368020    2995 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-891000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kube
rnetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-env-891000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUI
D:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:49:40.368322    2995 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:49:40.377876    2995 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0906 16:49:40.423863    2995 start.go:159] libmachine.API.Create for "force-systemd-env-891000" (driver="qemu2")
	I0906 16:49:40.423919    2995 client.go:168] LocalClient.Create starting
	I0906 16:49:40.424140    2995 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:49:40.424220    2995 main.go:141] libmachine: Decoding PEM data...
	I0906 16:49:40.424241    2995 main.go:141] libmachine: Parsing certificate...
	I0906 16:49:40.424333    2995 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:49:40.424374    2995 main.go:141] libmachine: Decoding PEM data...
	I0906 16:49:40.424392    2995 main.go:141] libmachine: Parsing certificate...
	I0906 16:49:40.424962    2995 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:49:40.548756    2995 main.go:141] libmachine: Creating SSH key...
	I0906 16:49:40.701005    2995 main.go:141] libmachine: Creating Disk image...
	I0906 16:49:40.701018    2995 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:49:40.701163    2995 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/force-systemd-env-891000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/force-systemd-env-891000/disk.qcow2
	I0906 16:49:40.709999    2995 main.go:141] libmachine: STDOUT: 
	I0906 16:49:40.710013    2995 main.go:141] libmachine: STDERR: 
	I0906 16:49:40.710069    2995 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/force-systemd-env-891000/disk.qcow2 +20000M
	I0906 16:49:40.717265    2995 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:49:40.717277    2995 main.go:141] libmachine: STDERR: 
	I0906 16:49:40.717289    2995 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/force-systemd-env-891000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/force-systemd-env-891000/disk.qcow2
	I0906 16:49:40.717295    2995 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:49:40.717338    2995 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/force-systemd-env-891000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/force-systemd-env-891000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/force-systemd-env-891000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:c2:0e:3c:b7:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/force-systemd-env-891000/disk.qcow2
	I0906 16:49:40.718880    2995 main.go:141] libmachine: STDOUT: 
	I0906 16:49:40.718892    2995 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:49:40.718916    2995 client.go:171] LocalClient.Create took 294.995458ms
	I0906 16:49:42.721088    2995 start.go:128] duration metric: createHost completed in 2.352774334s
	I0906 16:49:42.721222    2995 start.go:83] releasing machines lock for "force-systemd-env-891000", held for 2.353376916s
	W0906 16:49:42.721606    2995 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-891000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-891000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:49:42.731374    2995 out.go:177] 
	W0906 16:49:42.735460    2995 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 16:49:42.735484    2995 out.go:239] * 
	* 
	W0906 16:49:42.738203    2995 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 16:49:42.747351    2995 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-891000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-891000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-891000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (77.521792ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-env-891000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-891000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2023-09-06 16:49:42.843094 -0700 PDT m=+774.116309876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-891000 -n force-systemd-env-891000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-891000 -n force-systemd-env-891000: exit status 7 (33.274958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-891000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-891000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-891000
--- FAIL: TestForceSystemdEnv (10.13s)
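Editor's note: every failure in this group reduces to the same root cause, `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the qemu2 driver could not reach the socket_vmnet control socket. A minimal pre-flight sketch, assuming the default socket path shown in the log; the `check_socket` helper is hypothetical, for illustration only:

```shell
#!/bin/sh
# Hedged sketch: confirm the socket_vmnet control socket exists before
# running minikube with --driver=qemu2. The path is the default from
# the log above; check_socket is a hypothetical illustration helper.
check_socket() {
  # -S tests that the path exists and is a UNIX domain socket.
  if [ -S "$1" ]; then
    echo "socket present: $1"
  else
    echo "socket missing: $1"
  fi
}

# A missing (or unserved) socket yields exactly the "Connection refused"
# seen in the log; the socket_vmnet daemon must already be running
# (typically started via launchd or sudo) before the driver connects.
check_socket /var/run/socket_vmnet
```

On the failing CI host the daemon was evidently not serving this socket, which is why every qemu2 start in this run aborted within seconds.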

TestFunctional/parallel/ServiceCmdConnect (32.66s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-526000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-526000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-97mtg" [626d0d7d-2027-4b9f-97f5-90ff966efaf4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-97mtg" [626d0d7d-2027-4b9f-97f5-90ff966efaf4] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.008025875s
functional_test.go:1648: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.105.4:31840
functional_test.go:1660: error fetching http://192.168.105.4:31840: Get "http://192.168.105.4:31840": dial tcp 192.168.105.4:31840: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31840: Get "http://192.168.105.4:31840": dial tcp 192.168.105.4:31840: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31840: Get "http://192.168.105.4:31840": dial tcp 192.168.105.4:31840: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31840: Get "http://192.168.105.4:31840": dial tcp 192.168.105.4:31840: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31840: Get "http://192.168.105.4:31840": dial tcp 192.168.105.4:31840: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31840: Get "http://192.168.105.4:31840": dial tcp 192.168.105.4:31840: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31840: Get "http://192.168.105.4:31840": dial tcp 192.168.105.4:31840: connect: connection refused
functional_test.go:1680: failed to fetch http://192.168.105.4:31840: Get "http://192.168.105.4:31840": dial tcp 192.168.105.4:31840: connect: connection refused
functional_test.go:1597: service test failed - dumping debug information
functional_test.go:1598: -----------------------service failure post-mortem--------------------------------
functional_test.go:1601: (dbg) Run:  kubectl --context functional-526000 describe po hello-node-connect
functional_test.go:1605: hello-node pod describe:
Name:             hello-node-connect-7799dfb7c6-97mtg
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-526000/192.168.105.4
Start Time:       Wed, 06 Sep 2023 16:41:15 -0700
Labels:           app=hello-node-connect
pod-template-hash=7799dfb7c6
Annotations:      <none>
Status:           Running
IP:               10.244.0.10
IPs:
IP:           10.244.0.10
Controlled By:  ReplicaSet/hello-node-connect-7799dfb7c6
Containers:
echoserver-arm:
Container ID:   docker://1afd41d6410ec533cfbff705c6357c16d120b851b4b135548ad3a5739968d340
Image:          registry.k8s.io/echoserver-arm:1.8
Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       CrashLoopBackOff
Last State:     Terminated
Reason:       Error
Exit Code:    1
Started:      Wed, 06 Sep 2023 16:41:31 -0700
Finished:     Wed, 06 Sep 2023 16:41:31 -0700
Ready:          False
Restart Count:  2
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bcc7d (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
kube-api-access-bcc7d:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  31s                default-scheduler  Successfully assigned default/hello-node-connect-7799dfb7c6-97mtg to functional-526000
Normal   Pulled     16s (x3 over 30s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Normal   Created    16s (x3 over 30s)  kubelet            Created container echoserver-arm
Normal   Started    15s (x3 over 30s)  kubelet            Started container echoserver-arm
Warning  BackOff    4s (x3 over 29s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-7799dfb7c6-97mtg_default(626d0d7d-2027-4b9f-97f5-90ff966efaf4)

functional_test.go:1607: (dbg) Run:  kubectl --context functional-526000 logs -l app=hello-node-connect
functional_test.go:1611: hello-node logs:
exec /usr/sbin/nginx: exec format error
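Editor's note: `exec format error` almost always means the container's binary was built for a different CPU architecture than the node (here an arm64 host). A minimal sketch for checking both sides, assuming a POSIX shell; the `docker image inspect` line is commented out because it requires the image to be present locally:

```shell
#!/bin/sh
# Hedged sketch: "exec format error" usually indicates an architecture
# mismatch between the container binary and the node CPU.
node_arch=$(uname -m)   # e.g. arm64 on Apple Silicon, x86_64 on Intel
echo "node architecture: $node_arch"

# To see which platform an image was built for (image name taken from
# the log above; needs the image pulled locally):
# docker image inspect registry.k8s.io/echoserver-arm:1.8 \
#   --format '{{.Os}}/{{.Architecture}}'
```

If the image's advertised architecture does not match `uname -m`, the container will start and immediately crash exactly as in the CrashLoopBackOff events above.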
functional_test.go:1613: (dbg) Run:  kubectl --context functional-526000 describe svc hello-node-connect
functional_test.go:1617: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.103.86.43
IPs:                      10.103.86.43
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31840/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-526000 -n functional-526000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                        Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-526000 ssh findmnt                                                                                       | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| mount     | -p functional-526000                                                                                                | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2172964743/001:/mount-9p     |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-526000 ssh findmnt                                                                                       | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT | 06 Sep 23 16:41 PDT |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-526000 ssh -- ls                                                                                         | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT | 06 Sep 23 16:41 PDT |
	|           | -la /mount-9p                                                                                                       |                   |         |         |                     |                     |
	| ssh       | functional-526000 ssh cat                                                                                           | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT | 06 Sep 23 16:41 PDT |
	|           | /mount-9p/test-1694043693363508000                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-526000 ssh stat                                                                                          | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT | 06 Sep 23 16:41 PDT |
	|           | /mount-9p/created-by-test                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-526000 ssh stat                                                                                          | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT | 06 Sep 23 16:41 PDT |
	|           | /mount-9p/created-by-pod                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-526000 ssh sudo                                                                                          | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT | 06 Sep 23 16:41 PDT |
	|           | umount -f /mount-9p                                                                                                 |                   |         |         |                     |                     |
	| ssh       | functional-526000 ssh findmnt                                                                                       | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| mount     | -p functional-526000                                                                                                | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port606528201/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                 |                   |         |         |                     |                     |
	| ssh       | functional-526000 ssh findmnt                                                                                       | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT | 06 Sep 23 16:41 PDT |
	|           | -T /mount-9p | grep 9p                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-526000 ssh -- ls                                                                                         | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT | 06 Sep 23 16:41 PDT |
	|           | -la /mount-9p                                                                                                       |                   |         |         |                     |                     |
	| ssh       | functional-526000 ssh sudo                                                                                          | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT |                     |
	|           | umount -f /mount-9p                                                                                                 |                   |         |         |                     |                     |
	| mount     | -p functional-526000                                                                                                | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2147968616/001:/mount1  |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| mount     | -p functional-526000                                                                                                | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2147968616/001:/mount2  |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-526000 ssh findmnt                                                                                       | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT |                     |
	|           | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| mount     | -p functional-526000                                                                                                | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2147968616/001:/mount3  |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	| ssh       | functional-526000 ssh findmnt                                                                                       | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT | 06 Sep 23 16:41 PDT |
	|           | -T /mount1                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-526000 ssh findmnt                                                                                       | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT | 06 Sep 23 16:41 PDT |
	|           | -T /mount2                                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-526000 ssh findmnt                                                                                       | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT | 06 Sep 23 16:41 PDT |
	|           | -T /mount3                                                                                                          |                   |         |         |                     |                     |
	| mount     | -p functional-526000                                                                                                | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT |                     |
	|           | --kill=true                                                                                                         |                   |         |         |                     |                     |
	| start     | -p functional-526000                                                                                                | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT |                     |
	|           | --dry-run --memory                                                                                                  |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                             |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| start     | -p functional-526000 --dry-run                                                                                      | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| start     | -p functional-526000                                                                                                | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT |                     |
	|           | --dry-run --memory                                                                                                  |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                             |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                      |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                  | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT |                     |
	|           | -p functional-526000                                                                                                |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                              |                   |         |         |                     |                     |
	|-----------|---------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/06 16:41:40
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 16:41:40.493089    2082 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:41:40.493188    2082 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:41:40.493191    2082 out.go:309] Setting ErrFile to fd 2...
	I0906 16:41:40.493194    2082 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:41:40.493316    2082 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:41:40.494622    2082 out.go:303] Setting JSON to false
	I0906 16:41:40.511308    2082 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":674,"bootTime":1694043026,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0906 16:41:40.511395    2082 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 16:41:40.515558    2082 out.go:177] * [functional-526000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 16:41:40.522487    2082 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 16:41:40.522539    2082 notify.go:220] Checking for updates...
	I0906 16:41:40.529372    2082 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:41:40.532469    2082 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 16:41:40.535476    2082 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 16:41:40.536881    2082 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	I0906 16:41:40.539485    2082 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 16:41:40.542720    2082 config.go:182] Loaded profile config "functional-526000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:41:40.542954    2082 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 16:41:40.546334    2082 out.go:177] * Using the qemu2 driver based on existing profile
	I0906 16:41:40.553443    2082 start.go:298] selected driver: qemu2
	I0906 16:41:40.553448    2082 start.go:902] validating driver "qemu2" against &{Name:functional-526000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:functional-526000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false E
xtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:41:40.553522    2082 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 16:41:40.559527    2082 out.go:177] 
	W0906 16:41:40.563469    2082 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0906 16:41:40.567537    2082 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-09-06 23:38:53 UTC, ends at Wed 2023-09-06 23:41:47 UTC. --
	Sep 06 23:41:41 functional-526000 dockerd[6612]: time="2023-09-06T23:41:41.441540958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 23:41:41 functional-526000 dockerd[6612]: time="2023-09-06T23:41:41.441896994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 23:41:41 functional-526000 dockerd[6612]: time="2023-09-06T23:41:41.441912285Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 23:41:41 functional-526000 dockerd[6612]: time="2023-09-06T23:41:41.441958660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 23:41:41 functional-526000 dockerd[6612]: time="2023-09-06T23:41:41.449634378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 23:41:41 functional-526000 dockerd[6612]: time="2023-09-06T23:41:41.449689544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 23:41:41 functional-526000 dockerd[6612]: time="2023-09-06T23:41:41.449975498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 23:41:41 functional-526000 dockerd[6612]: time="2023-09-06T23:41:41.449991081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 23:41:41 functional-526000 cri-dockerd[6871]: time="2023-09-06T23:41:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0d3e4982ffc917a622f3f2773d41cd5450f226e381f731137ddc859d838a28a2/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 06 23:41:41 functional-526000 cri-dockerd[6871]: time="2023-09-06T23:41:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a199837489039270b47190f7f420144a4e416cfb2ace595fae443cc8553f204c/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 06 23:41:41 functional-526000 dockerd[6606]: time="2023-09-06T23:41:41.842102775Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 06 23:41:42 functional-526000 dockerd[6612]: time="2023-09-06T23:41:42.995070139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 23:41:42 functional-526000 dockerd[6612]: time="2023-09-06T23:41:42.995095222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 23:41:42 functional-526000 dockerd[6612]: time="2023-09-06T23:41:42.995101222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 23:41:42 functional-526000 dockerd[6612]: time="2023-09-06T23:41:42.995105263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 23:41:43 functional-526000 dockerd[6606]: time="2023-09-06T23:41:43.054845407Z" level=info msg="ignoring event" container=d1a03e523f44b61e9142f2ce52ee6cbdb7b040ea54bd51c7f4890098542ab424 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 23:41:43 functional-526000 dockerd[6612]: time="2023-09-06T23:41:43.054860782Z" level=info msg="shim disconnected" id=d1a03e523f44b61e9142f2ce52ee6cbdb7b040ea54bd51c7f4890098542ab424 namespace=moby
	Sep 06 23:41:43 functional-526000 dockerd[6612]: time="2023-09-06T23:41:43.055041612Z" level=warning msg="cleaning up after shim disconnected" id=d1a03e523f44b61e9142f2ce52ee6cbdb7b040ea54bd51c7f4890098542ab424 namespace=moby
	Sep 06 23:41:43 functional-526000 dockerd[6612]: time="2023-09-06T23:41:43.055045987Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 23:41:43 functional-526000 cri-dockerd[6871]: time="2023-09-06T23:41:43Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Status: Downloaded newer image for kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 06 23:41:43 functional-526000 dockerd[6612]: time="2023-09-06T23:41:43.544240419Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 23:41:43 functional-526000 dockerd[6612]: time="2023-09-06T23:41:43.544293626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 23:41:43 functional-526000 dockerd[6612]: time="2023-09-06T23:41:43.544309418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 23:41:43 functional-526000 dockerd[6612]: time="2023-09-06T23:41:43.544320209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 23:41:43 functional-526000 dockerd[6606]: time="2023-09-06T23:41:43.698225685Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                  CREATED              STATE               NAME                        ATTEMPT             POD ID
	48dac5100f2a5       kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   4 seconds ago        Running             dashboard-metrics-scraper   0                   0d3e4982ffc91
	d1a03e523f44b       72565bf5bbedf                                                                                          5 seconds ago        Exited              echoserver-arm              3                   2ca60650a38b1
	c7d2f7286d982       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e    12 seconds ago       Exited              mount-munger                0                   a80b7cf100aa5
	1afd41d6410ec       72565bf5bbedf                                                                                          17 seconds ago       Exited              echoserver-arm              2                   b682419696e12
	519c418434eb5       nginx@sha256:104c7c5c54f2685f0f46f3be607ce60da7085da3eaa5ad22d3d9f01594295e9c                          22 seconds ago       Running             myfrontend                  0                   9fe4a4b366525
	78bc7cc25011a       nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                          39 seconds ago       Running             nginx                       0                   8ae40736cdd4f
	fc88323840050       97e04611ad434                                                                                          About a minute ago   Running             coredns                     2                   6de0be978b2da
	7e87eae85aa51       ba04bb24b9575                                                                                          About a minute ago   Running             storage-provisioner         1                   d7ae9e50e84d4
	40f14a12e3845       812f5241df7fd                                                                                          About a minute ago   Running             kube-proxy                  2                   09b0b172511c2
	45df9d50050dc       b29fb62480892                                                                                          About a minute ago   Running             kube-apiserver              0                   be6b3918d554a
	cc014ef04c2f0       b4a5a57e99492                                                                                          About a minute ago   Running             kube-scheduler              2                   418dcaca581fa
	b7793daccbf5d       8b6e1980b7584                                                                                          About a minute ago   Running             kube-controller-manager     2                   333dc1b689620
	386b84351f22e       9cdd6470f48c8                                                                                          About a minute ago   Running             etcd                        2                   6c6524ad903b2
	3ac65899f1546       ba04bb24b9575                                                                                          About a minute ago   Exited              storage-provisioner         0                   3c4a369f65132
	29074846cba67       97e04611ad434                                                                                          2 minutes ago        Exited              coredns                     1                   656b03d91a36a
	25dd8bced5731       812f5241df7fd                                                                                          2 minutes ago        Exited              kube-proxy                  1                   1766d2e4c950f
	e8792ea8dd448       8b6e1980b7584                                                                                          2 minutes ago        Exited              kube-controller-manager     1                   ffb08cf27c417
	ae0de609e7ee5       9cdd6470f48c8                                                                                          2 minutes ago        Exited              etcd                        1                   0c2c73a963b55
	c34f068717514       b4a5a57e99492                                                                                          2 minutes ago        Exited              kube-scheduler              1                   ac52eb31337f8
	
	* 
	* ==> coredns [29074846cba6] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:44288 - 45483 "HINFO IN 5602981056590628840.5023268703407769840. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004979382s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [fc8832384005] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35245 - 39946 "HINFO IN 6532247101021519723.1201329728894811343. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004171636s
	[INFO] 10.244.0.1:36032 - 31270 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000523619s
	[INFO] 10.244.0.1:50089 - 26842 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000093707s
	[INFO] 10.244.0.1:29446 - 35014 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000037083s
	[INFO] 10.244.0.1:35910 - 4814 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001047904s
	[INFO] 10.244.0.1:44451 - 30691 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000071833s
	[INFO] 10.244.0.1:14726 - 60183 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000147207s
	
	* 
	* ==> describe nodes <==
	* Name:               functional-526000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-526000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2
	                    minikube.k8s.io/name=functional-526000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_06T16_39_10_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 06 Sep 2023 23:39:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-526000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 06 Sep 2023 23:41:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 06 Sep 2023 23:41:27 +0000   Wed, 06 Sep 2023 23:39:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 06 Sep 2023 23:41:27 +0000   Wed, 06 Sep 2023 23:39:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 06 Sep 2023 23:41:27 +0000   Wed, 06 Sep 2023 23:39:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 06 Sep 2023 23:41:27 +0000   Wed, 06 Sep 2023 23:39:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-526000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 0195d31c1d044dfbadd712ee23dd0a18
	  System UUID:                0195d31c1d044dfbadd712ee23dd0a18
	  Boot ID:                    35849058-1da7-41f2-88f4-863469105801
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.5
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-759d89bdcc-vgzjl                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  default                     hello-node-connect-7799dfb7c6-97mtg           0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 coredns-5dd5756b68-zcqkc                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m24s
	  kube-system                 etcd-functional-526000                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m37s
	  kube-system                 kube-apiserver-functional-526000              250m (12%)    0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-controller-manager-functional-526000     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kube-proxy-wlhms                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-scheduler-functional-526000              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kubernetes-dashboard        dashboard-metrics-scraper-7fd5cb4ddc-x567f    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-7q685         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m23s              kube-proxy       
	  Normal  Starting                 78s                kube-proxy       
	  Normal  Starting                 2m1s               kube-proxy       
	  Normal  Starting                 2m37s              kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m37s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m37s              kubelet          Node functional-526000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m37s              kubelet          Node functional-526000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m37s              kubelet          Node functional-526000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m34s              kubelet          Node functional-526000 status is now: NodeReady
	  Normal  RegisteredNode           2m25s              node-controller  Node functional-526000 event: Registered Node functional-526000 in Controller
	  Normal  RegisteredNode           109s               node-controller  Node functional-526000 event: Registered Node functional-526000 in Controller
	  Normal  Starting                 84s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  84s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  83s (x9 over 84s)  kubelet          Node functional-526000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    83s (x7 over 84s)  kubelet          Node functional-526000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     83s (x7 over 84s)  kubelet          Node functional-526000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           68s                node-controller  Node functional-526000 event: Registered Node functional-526000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +4.430469] systemd-fstab-generator[3709]: Ignoring "noauto" for root device
	[  +0.135565] systemd-fstab-generator[3741]: Ignoring "noauto" for root device
	[  +0.083492] systemd-fstab-generator[3752]: Ignoring "noauto" for root device
	[  +0.082836] systemd-fstab-generator[3765]: Ignoring "noauto" for root device
	[  +1.386461] kauditd_printk_skb: 28 callbacks suppressed
	[ +10.021727] systemd-fstab-generator[4357]: Ignoring "noauto" for root device
	[  +0.073525] systemd-fstab-generator[4368]: Ignoring "noauto" for root device
	[  +0.065978] systemd-fstab-generator[4379]: Ignoring "noauto" for root device
	[  +0.069499] systemd-fstab-generator[4390]: Ignoring "noauto" for root device
	[  +0.085335] systemd-fstab-generator[4465]: Ignoring "noauto" for root device
	[  +6.049185] kauditd_printk_skb: 34 callbacks suppressed
	[Sep 6 23:40] systemd-fstab-generator[6151]: Ignoring "noauto" for root device
	[  +0.126664] systemd-fstab-generator[6184]: Ignoring "noauto" for root device
	[  +0.084116] systemd-fstab-generator[6195]: Ignoring "noauto" for root device
	[  +0.085884] systemd-fstab-generator[6208]: Ignoring "noauto" for root device
	[ +11.413078] systemd-fstab-generator[6757]: Ignoring "noauto" for root device
	[  +0.064867] systemd-fstab-generator[6768]: Ignoring "noauto" for root device
	[  +0.065415] systemd-fstab-generator[6779]: Ignoring "noauto" for root device
	[  +0.070091] systemd-fstab-generator[6790]: Ignoring "noauto" for root device
	[  +0.093304] systemd-fstab-generator[6864]: Ignoring "noauto" for root device
	[  +1.039126] systemd-fstab-generator[7114]: Ignoring "noauto" for root device
	[  +4.613695] kauditd_printk_skb: 29 callbacks suppressed
	[ +26.260492] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.009856] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[Sep 6 23:41] kauditd_printk_skb: 6 callbacks suppressed
	
	* 
	* ==> etcd [386b84351f22] <==
	* {"level":"info","ts":"2023-09-06T23:40:24.805305Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-06T23:40:24.805325Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-06T23:40:24.805268Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2023-09-06T23:40:24.805404Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2023-09-06T23:40:24.805462Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-06T23:40:24.805494Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-06T23:40:24.807036Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-06T23:40:24.807149Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-06T23:40:24.807176Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-06T23:40:24.807233Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-06T23:40:24.807252Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-06T23:40:26.401909Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2023-09-06T23:40:26.402075Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-09-06T23:40:26.402133Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-09-06T23:40:26.402165Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2023-09-06T23:40:26.402185Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-09-06T23:40:26.402254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2023-09-06T23:40:26.402298Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-09-06T23:40:26.407288Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-06T23:40:26.407607Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-06T23:40:26.40979Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-06T23:40:26.410048Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-06T23:40:26.410095Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-06T23:40:26.407287Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-526000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-06T23:40:26.409808Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	
	* 
	* ==> etcd [ae0de609e7ee] <==
	* {"level":"info","ts":"2023-09-06T23:39:43.547019Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-06T23:39:45.344932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2023-09-06T23:39:45.345093Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-06T23:39:45.345141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2023-09-06T23:39:45.345179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2023-09-06T23:39:45.345197Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-09-06T23:39:45.345223Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2023-09-06T23:39:45.345251Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-09-06T23:39:45.347985Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-06T23:39:45.347989Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-526000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-06T23:39:45.348267Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-06T23:39:45.350259Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-06T23:39:45.350291Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-09-06T23:39:45.35069Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-06T23:39:45.350717Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-06T23:40:11.166347Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-09-06T23:40:11.166384Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-526000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2023-09-06T23:40:11.166457Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-06T23:40:11.166473Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-06T23:40:11.166516Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-06T23:40:11.166552Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2023-09-06T23:40:11.176211Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2023-09-06T23:40:11.177878Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-06T23:40:11.177916Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-06T23:40:11.17792Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-526000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	* 
	* ==> kernel <==
	*  23:41:47 up 2 min,  0 users,  load average: 0.26, 0.13, 0.04
	Linux functional-526000 5.10.57 #1 SMP PREEMPT Thu Aug 24 12:01:08 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [45df9d50050d] <==
	* I0906 23:40:27.079900       1 shared_informer.go:318] Caches are synced for configmaps
	I0906 23:40:27.080157       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0906 23:40:27.080206       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0906 23:40:27.080877       1 aggregator.go:166] initial CRD sync complete...
	I0906 23:40:27.080908       1 autoregister_controller.go:141] Starting autoregister controller
	I0906 23:40:27.080922       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0906 23:40:27.080938       1 cache.go:39] Caches are synced for autoregister controller
	E0906 23:40:27.082828       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0906 23:40:27.083402       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0906 23:40:27.984155       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0906 23:40:28.613101       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0906 23:40:28.616328       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0906 23:40:28.628267       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0906 23:40:28.639762       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0906 23:40:28.642197       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0906 23:40:39.133402       1 controller.go:624] quota admission added evaluator for: endpoints
	I0906 23:40:39.179664       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0906 23:40:48.982662       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.109.241.116"}
	I0906 23:40:54.654240       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0906 23:40:54.704553       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.227.197"}
	I0906 23:41:04.973764       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.100.170.9"}
	I0906 23:41:15.410634       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.103.86.43"}
	I0906 23:41:41.017900       1 controller.go:624] quota admission added evaluator for: namespaces
	I0906 23:41:41.084373       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.7.135"}
	I0906 23:41:41.098824       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.113.213"}
	
	* 
	* ==> kube-controller-manager [b7793daccbf5] <==
	* E0906 23:41:41.062300       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0906 23:41:41.062321       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0906 23:41:41.062330       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7fd5cb4ddc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0906 23:41:41.062282       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="7.58472ms"
	E0906 23:41:41.062335       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" failed with pods "dashboard-metrics-scraper-7fd5cb4ddc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0906 23:41:41.065729       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="1.403354ms"
	E0906 23:41:41.065751       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0906 23:41:41.065790       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0906 23:41:41.068107       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="2.303216ms"
	E0906 23:41:41.068252       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" failed with pods "dashboard-metrics-scraper-7fd5cb4ddc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0906 23:41:41.068244       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7fd5cb4ddc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0906 23:41:41.089991       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-7q685"
	I0906 23:41:41.099243       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.773518ms"
	I0906 23:41:41.115226       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="15.411437ms"
	I0906 23:41:41.115444       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="41.416µs"
	I0906 23:41:41.116038       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-7fd5cb4ddc-x567f"
	I0906 23:41:41.127191       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="18.060189ms"
	I0906 23:41:41.132004       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="26.417µs"
	I0906 23:41:41.133036       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="5.821205ms"
	I0906 23:41:41.133104       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="17.375µs"
	I0906 23:41:41.137081       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="13.583µs"
	I0906 23:41:42.965949       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="25.291µs"
	I0906 23:41:43.655186       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="36.458µs"
	I0906 23:41:43.682292       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="5.794413ms"
	I0906 23:41:43.682325       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="18.625µs"
	
	* 
	* ==> kube-controller-manager [e8792ea8dd44] <==
	* I0906 23:39:58.672300       1 shared_informer.go:318] Caches are synced for crt configmap
	I0906 23:39:58.673576       1 shared_informer.go:318] Caches are synced for daemon sets
	I0906 23:39:58.673585       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0906 23:39:58.675697       1 shared_informer.go:318] Caches are synced for job
	I0906 23:39:58.676172       1 shared_informer.go:318] Caches are synced for TTL
	I0906 23:39:58.677335       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0906 23:39:58.681072       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0906 23:39:58.681131       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0906 23:39:58.681150       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0906 23:39:58.681198       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0906 23:39:58.762992       1 shared_informer.go:318] Caches are synced for taint
	I0906 23:39:58.763065       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0906 23:39:58.763117       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-526000"
	I0906 23:39:58.763156       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0906 23:39:58.763066       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0906 23:39:58.763179       1 taint_manager.go:211] "Sending events to api server"
	I0906 23:39:58.763268       1 event.go:307] "Event occurred" object="functional-526000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-526000 event: Registered Node functional-526000 in Controller"
	I0906 23:39:58.766899       1 shared_informer.go:318] Caches are synced for resource quota
	I0906 23:39:58.773998       1 shared_informer.go:318] Caches are synced for resource quota
	I0906 23:39:58.811801       1 shared_informer.go:318] Caches are synced for PV protection
	I0906 23:39:58.870964       1 shared_informer.go:318] Caches are synced for persistent volume
	I0906 23:39:58.870987       1 shared_informer.go:318] Caches are synced for attach detach
	I0906 23:39:59.179779       1 shared_informer.go:318] Caches are synced for garbage collector
	I0906 23:39:59.179851       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0906 23:39:59.188008       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [25dd8bced573] <==
	* I0906 23:39:44.084117       1 server_others.go:69] "Using iptables proxy"
	I0906 23:39:45.991593       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0906 23:39:46.007875       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0906 23:39:46.007891       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 23:39:46.008511       1 server_others.go:152] "Using iptables Proxier"
	I0906 23:39:46.008529       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0906 23:39:46.008594       1 server.go:846] "Version info" version="v1.28.1"
	I0906 23:39:46.008598       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 23:39:46.008892       1 config.go:188] "Starting service config controller"
	I0906 23:39:46.008900       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0906 23:39:46.008907       1 config.go:97] "Starting endpoint slice config controller"
	I0906 23:39:46.008909       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0906 23:39:46.009108       1 config.go:315] "Starting node config controller"
	I0906 23:39:46.009110       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0906 23:39:46.109448       1 shared_informer.go:318] Caches are synced for node config
	I0906 23:39:46.109835       1 shared_informer.go:318] Caches are synced for service config
	I0906 23:39:46.109890       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [40f14a12e384] <==
	* I0906 23:40:28.536971       1 server_others.go:69] "Using iptables proxy"
	I0906 23:40:28.541642       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0906 23:40:28.550119       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0906 23:40:28.550128       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 23:40:28.551498       1 server_others.go:152] "Using iptables Proxier"
	I0906 23:40:28.551521       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0906 23:40:28.551582       1 server.go:846] "Version info" version="v1.28.1"
	I0906 23:40:28.551592       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 23:40:28.551926       1 config.go:188] "Starting service config controller"
	I0906 23:40:28.551936       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0906 23:40:28.551961       1 config.go:97] "Starting endpoint slice config controller"
	I0906 23:40:28.551963       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0906 23:40:28.552568       1 config.go:315] "Starting node config controller"
	I0906 23:40:28.552572       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0906 23:40:28.653047       1 shared_informer.go:318] Caches are synced for node config
	I0906 23:40:28.653068       1 shared_informer.go:318] Caches are synced for service config
	I0906 23:40:28.653080       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [c34f06871751] <==
	* I0906 23:39:43.637462       1 serving.go:348] Generated self-signed cert in-memory
	W0906 23:39:45.955017       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0906 23:39:45.955065       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0906 23:39:45.955074       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0906 23:39:45.955081       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0906 23:39:45.988900       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0906 23:39:45.988950       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 23:39:45.991339       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0906 23:39:45.991715       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0906 23:39:45.991772       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0906 23:39:45.991781       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 23:39:46.092888       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 23:40:11.161020       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0906 23:40:11.161048       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0906 23:40:11.161091       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [cc014ef04c2f] <==
	* I0906 23:40:25.270053       1 serving.go:348] Generated self-signed cert in-memory
	W0906 23:40:27.015001       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0906 23:40:27.015093       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0906 23:40:27.015127       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0906 23:40:27.015148       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0906 23:40:27.047430       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0906 23:40:27.047445       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 23:40:27.048132       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0906 23:40:27.048162       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 23:40:27.048906       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0906 23:40:27.049389       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0906 23:40:27.149191       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-09-06 23:38:53 UTC, ends at Wed 2023-09-06 23:41:47 UTC. --
	Sep 06 23:41:34 functional-526000 kubelet[7120]: I0906 23:41:34.346348    7120 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n2mp\" (UniqueName: \"kubernetes.io/projected/47890341-b8e1-40fd-8ee3-75d463d96181-kube-api-access-2n2mp\") pod \"busybox-mount\" (UID: \"47890341-b8e1-40fd-8ee3-75d463d96181\") " pod="default/busybox-mount"
	Sep 06 23:41:34 functional-526000 kubelet[7120]: I0906 23:41:34.346371    7120 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/47890341-b8e1-40fd-8ee3-75d463d96181-test-volume\") pod \"busybox-mount\" (UID: \"47890341-b8e1-40fd-8ee3-75d463d96181\") " pod="default/busybox-mount"
	Sep 06 23:41:37 functional-526000 kubelet[7120]: I0906 23:41:37.768467    7120 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/47890341-b8e1-40fd-8ee3-75d463d96181-test-volume\") pod \"47890341-b8e1-40fd-8ee3-75d463d96181\" (UID: \"47890341-b8e1-40fd-8ee3-75d463d96181\") "
	Sep 06 23:41:37 functional-526000 kubelet[7120]: I0906 23:41:37.768506    7120 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2n2mp\" (UniqueName: \"kubernetes.io/projected/47890341-b8e1-40fd-8ee3-75d463d96181-kube-api-access-2n2mp\") pod \"47890341-b8e1-40fd-8ee3-75d463d96181\" (UID: \"47890341-b8e1-40fd-8ee3-75d463d96181\") "
	Sep 06 23:41:37 functional-526000 kubelet[7120]: I0906 23:41:37.768507    7120 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47890341-b8e1-40fd-8ee3-75d463d96181-test-volume" (OuterVolumeSpecName: "test-volume") pod "47890341-b8e1-40fd-8ee3-75d463d96181" (UID: "47890341-b8e1-40fd-8ee3-75d463d96181"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 06 23:41:37 functional-526000 kubelet[7120]: I0906 23:41:37.768533    7120 reconciler_common.go:300] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/47890341-b8e1-40fd-8ee3-75d463d96181-test-volume\") on node \"functional-526000\" DevicePath \"\""
	Sep 06 23:41:37 functional-526000 kubelet[7120]: I0906 23:41:37.769165    7120 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47890341-b8e1-40fd-8ee3-75d463d96181-kube-api-access-2n2mp" (OuterVolumeSpecName: "kube-api-access-2n2mp") pod "47890341-b8e1-40fd-8ee3-75d463d96181" (UID: "47890341-b8e1-40fd-8ee3-75d463d96181"). InnerVolumeSpecName "kube-api-access-2n2mp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 06 23:41:37 functional-526000 kubelet[7120]: I0906 23:41:37.869290    7120 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2n2mp\" (UniqueName: \"kubernetes.io/projected/47890341-b8e1-40fd-8ee3-75d463d96181-kube-api-access-2n2mp\") on node \"functional-526000\" DevicePath \"\""
	Sep 06 23:41:38 functional-526000 kubelet[7120]: I0906 23:41:38.511810    7120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a80b7cf100aa5fc58497d9e0ec07681fe7c97e064490cf4642fb2c52370fe8e1"
	Sep 06 23:41:41 functional-526000 kubelet[7120]: I0906 23:41:41.097551    7120 topology_manager.go:215] "Topology Admit Handler" podUID="a4b546c3-f78f-461b-bf2d-a731a916aae1" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-7q685"
	Sep 06 23:41:41 functional-526000 kubelet[7120]: E0906 23:41:41.097599    7120 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="47890341-b8e1-40fd-8ee3-75d463d96181" containerName="mount-munger"
	Sep 06 23:41:41 functional-526000 kubelet[7120]: I0906 23:41:41.097615    7120 memory_manager.go:346] "RemoveStaleState removing state" podUID="47890341-b8e1-40fd-8ee3-75d463d96181" containerName="mount-munger"
	Sep 06 23:41:41 functional-526000 kubelet[7120]: I0906 23:41:41.122089    7120 topology_manager.go:215] "Topology Admit Handler" podUID="eb49986d-7d7d-4949-a605-fc6c4a0ef4ea" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-7fd5cb4ddc-x567f"
	Sep 06 23:41:41 functional-526000 kubelet[7120]: I0906 23:41:41.285885    7120 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jc5q\" (UniqueName: \"kubernetes.io/projected/eb49986d-7d7d-4949-a605-fc6c4a0ef4ea-kube-api-access-2jc5q\") pod \"dashboard-metrics-scraper-7fd5cb4ddc-x567f\" (UID: \"eb49986d-7d7d-4949-a605-fc6c4a0ef4ea\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc-x567f"
	Sep 06 23:41:41 functional-526000 kubelet[7120]: I0906 23:41:41.285927    7120 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a4b546c3-f78f-461b-bf2d-a731a916aae1-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-7q685\" (UID: \"a4b546c3-f78f-461b-bf2d-a731a916aae1\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7q685"
	Sep 06 23:41:41 functional-526000 kubelet[7120]: I0906 23:41:41.285940    7120 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9v8r\" (UniqueName: \"kubernetes.io/projected/a4b546c3-f78f-461b-bf2d-a731a916aae1-kube-api-access-k9v8r\") pod \"kubernetes-dashboard-8694d4445c-7q685\" (UID: \"a4b546c3-f78f-461b-bf2d-a731a916aae1\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7q685"
	Sep 06 23:41:41 functional-526000 kubelet[7120]: I0906 23:41:41.285950    7120 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/eb49986d-7d7d-4949-a605-fc6c4a0ef4ea-tmp-volume\") pod \"dashboard-metrics-scraper-7fd5cb4ddc-x567f\" (UID: \"eb49986d-7d7d-4949-a605-fc6c4a0ef4ea\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc-x567f"
	Sep 06 23:41:41 functional-526000 kubelet[7120]: I0906 23:41:41.615310    7120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a199837489039270b47190f7f420144a4e416cfb2ace595fae443cc8553f204c"
	Sep 06 23:41:41 functional-526000 kubelet[7120]: I0906 23:41:41.622281    7120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d3e4982ffc917a622f3f2773d41cd5450f226e381f731137ddc859d838a28a2"
	Sep 06 23:41:42 functional-526000 kubelet[7120]: I0906 23:41:42.954049    7120 scope.go:117] "RemoveContainer" containerID="1afd41d6410ec533cfbff705c6357c16d120b851b4b135548ad3a5739968d340"
	Sep 06 23:41:42 functional-526000 kubelet[7120]: E0906 23:41:42.954152    7120 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-97mtg_default(626d0d7d-2027-4b9f-97f5-90ff966efaf4)\"" pod="default/hello-node-connect-7799dfb7c6-97mtg" podUID="626d0d7d-2027-4b9f-97f5-90ff966efaf4"
	Sep 06 23:41:42 functional-526000 kubelet[7120]: I0906 23:41:42.955554    7120 scope.go:117] "RemoveContainer" containerID="cd820dbdc075098e3657cbb0860b14048f0b49b46f0fe1992f1f15cd4a1713e9"
	Sep 06 23:41:43 functional-526000 kubelet[7120]: I0906 23:41:43.640914    7120 scope.go:117] "RemoveContainer" containerID="cd820dbdc075098e3657cbb0860b14048f0b49b46f0fe1992f1f15cd4a1713e9"
	Sep 06 23:41:43 functional-526000 kubelet[7120]: I0906 23:41:43.641072    7120 scope.go:117] "RemoveContainer" containerID="d1a03e523f44b61e9142f2ce52ee6cbdb7b040ea54bd51c7f4890098542ab424"
	Sep 06 23:41:43 functional-526000 kubelet[7120]: E0906 23:41:43.641159    7120 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 40s restarting failed container=echoserver-arm pod=hello-node-759d89bdcc-vgzjl_default(5a08dcb5-e101-4f1c-aee7-7a332e7b8f34)\"" pod="default/hello-node-759d89bdcc-vgzjl" podUID="5a08dcb5-e101-4f1c-aee7-7a332e7b8f34"
	
	* 
	* ==> storage-provisioner [3ac65899f154] <==
	* I0906 23:39:56.775529       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 23:39:56.780269       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 23:39:56.780305       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 23:39:56.783298       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 23:39:56.783440       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"48d02cc5-fb7a-4326-89af-86769c2e95de", APIVersion:"v1", ResourceVersion:"469", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-526000_910e3967-ae1c-4dcc-aef2-02ba604acb0e became leader
	I0906 23:39:56.783464       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-526000_910e3967-ae1c-4dcc-aef2-02ba604acb0e!
	I0906 23:39:56.884427       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-526000_910e3967-ae1c-4dcc-aef2-02ba604acb0e!
	
	* 
	* ==> storage-provisioner [7e87eae85aa5] <==
	* I0906 23:40:28.527738       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 23:40:28.535113       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 23:40:28.535208       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 23:40:45.922050       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 23:40:45.922112       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-526000_2ba90629-8c80-4099-95c4-ce43a435b74a!
	I0906 23:40:45.922989       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"48d02cc5-fb7a-4326-89af-86769c2e95de", APIVersion:"v1", ResourceVersion:"578", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-526000_2ba90629-8c80-4099-95c4-ce43a435b74a became leader
	I0906 23:40:46.022764       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-526000_2ba90629-8c80-4099-95c4-ce43a435b74a!
	I0906 23:41:11.391487       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0906 23:41:11.392153       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"f979713b-7d61-4aab-b02e-a378dc550afd", APIVersion:"v1", ResourceVersion:"681", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0906 23:41:11.391574       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    ee9d5201-b1a0-48d3-b699-e2c0f60e61a4 363 0 2023-09-06 23:39:24 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2023-09-06 23:39:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-f979713b-7d61-4aab-b02e-a378dc550afd &PersistentVolumeClaim{ObjectMeta:{myclaim  default  f979713b-7d61-4aab-b02e-a378dc550afd 681 0 2023-09-06 23:41:11 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2023-09-06 23:41:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2023-09-06 23:41:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0906 23:41:11.392723       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-f979713b-7d61-4aab-b02e-a378dc550afd" provisioned
	I0906 23:41:11.392758       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0906 23:41:11.392776       1 volume_store.go:212] Trying to save persistentvolume "pvc-f979713b-7d61-4aab-b02e-a378dc550afd"
	I0906 23:41:11.398963       1 volume_store.go:219] persistentvolume "pvc-f979713b-7d61-4aab-b02e-a378dc550afd" saved
	I0906 23:41:11.399602       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"f979713b-7d61-4aab-b02e-a378dc550afd", APIVersion:"v1", ResourceVersion:"681", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-f979713b-7d61-4aab-b02e-a378dc550afd
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-526000 -n functional-526000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-526000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount kubernetes-dashboard-8694d4445c-7q685
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-526000 describe pod busybox-mount kubernetes-dashboard-8694d4445c-7q685
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-526000 describe pod busybox-mount kubernetes-dashboard-8694d4445c-7q685: exit status 1 (44.597417ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-526000/192.168.105.4
	Start Time:       Wed, 06 Sep 2023 16:41:34 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://c7d2f7286d98245d171ac81b993180f6c288ec0e8724bdacd02512e0050b58a6
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 06 Sep 2023 16:41:36 -0700
	      Finished:     Wed, 06 Sep 2023 16:41:36 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2n2mp (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-2n2mp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  13s   default-scheduler  Successfully assigned default/busybox-mount to functional-526000
	  Normal  Pulling    13s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     12s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.183s (1.183s including waiting)
	  Normal  Created    12s   kubelet            Created container mount-munger
	  Normal  Started    11s   kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "kubernetes-dashboard-8694d4445c-7q685" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context functional-526000 describe pod busybox-mount kubernetes-dashboard-8694d4445c-7q685: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (32.66s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-526000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-526000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 80. stderr: I0906 16:41:04.700825    1929 out.go:296] Setting OutFile to fd 1 ...
I0906 16:41:04.701067    1929 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 16:41:04.701069    1929 out.go:309] Setting ErrFile to fd 2...
I0906 16:41:04.701071    1929 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 16:41:04.701206    1929 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
I0906 16:41:04.701424    1929 mustload.go:65] Loading cluster: functional-526000
I0906 16:41:04.701612    1929 config.go:182] Loaded profile config "functional-526000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0906 16:41:04.706697    1929 out.go:177] 
W0906 16:41:04.710797    1929 out.go:239] X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17174-979/.minikube/machines/functional-526000/monitor: connect: connection refused
X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17174-979/.minikube/machines/functional-526000/monitor: connect: connection refused
W0906 16:41:04.710807    1929 out.go:239] * 
* 
W0906 16:41:04.712225    1929 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                           │
│    * If the above advice does not help, please let us know:                                                               │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
│                                                                                                                           │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
│    * Please also attach the following file to the GitHub issue:                                                           │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_tunnel_7075cb44437691034d825beac909ba5df9688569_0.log    │
│                                                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                           │
│    * If the above advice does not help, please let us know:                                                               │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
│                                                                                                                           │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
│    * Please also attach the following file to the GitHub issue:                                                           │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_tunnel_7075cb44437691034d825beac909ba5df9688569_0.log    │
│                                                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0906 16:41:04.715713    1929 out.go:177] 

                                                
                                                
stdout: 

                                                
                                                
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-526000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-526000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-526000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-526000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1928: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-526000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-526000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.18s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (1.04s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-320000
image_test.go:105: failed to pass build-args with args: "out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-320000" : 
-- stdout --
	Sending build context to Docker daemon  2.048kB
	Step 1/5 : FROM gcr.io/google-containers/alpine-with-bash:1.0
	 ---> 822c13824dc2
	Step 2/5 : ARG ENV_A
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in f702a89cc30d
	Removing intermediate container f702a89cc30d
	 ---> 9e4db99e3a97
	Step 3/5 : ARG ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in cfee71eb1c5a
	Removing intermediate container cfee71eb1c5a
	 ---> f3033ef3f5e7
	Step 4/5 : RUN echo "test-build-arg" $ENV_A $ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in ead6248b5d42
	exec /bin/sh: exec format error
	

                                                
                                                
-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            Install the buildx component to build images with BuildKit:
	            https://docs.docker.com/go/buildx/
	
	The command '/bin/sh -c echo "test-build-arg" $ENV_A $ENV_B' returned a non-zero code: 1

                                                
                                                
** /stderr **
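Editor's note: the repeated platform warnings and the `exec /bin/sh: exec format error` at Step 4/5 indicate the base image `gcr.io/google-containers/alpine-with-bash:1.0` is published only for linux/amd64, so its `/bin/sh` cannot execute on this arm64 host (no binfmt/QEMU user-mode emulation is set up inside the minikube VM). A hedged sketch of one possible fix, assuming a multi-arch base image is acceptable for the test Dockerfile (the `alpine:3.18` tag below is an illustrative substitute, not the image the test actually uses):

```dockerfile
# Same steps as the failing testdata Dockerfile, but FROM a base that
# publishes an arm64 variant, so RUN steps execute natively on arm64.
FROM alpine:3.18
ARG ENV_A
ARG ENV_B
RUN echo "test-build-arg" $ENV_A $ENV_B
```

The alternative, keeping the amd64-only base, would require emulation support (e.g. binfmt_misc with qemu-user-static) inside the build environment.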
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-320000 -n image-320000
helpers_test.go:244: <<< TestImageBuild/serial/BuildWithBuildArg FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestImageBuild/serial/BuildWithBuildArg]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p image-320000 logs -n 25
helpers_test.go:252: TestImageBuild/serial/BuildWithBuildArg logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|--------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                                        Args                                                        |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-526000                                                                                               | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT |                     |
	|                | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2147968616/001:/mount2 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	| ssh            | functional-526000 ssh findmnt                                                                                      | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT |                     |
	|                | -T /mount1                                                                                                         |                   |         |         |                     |                     |
	| mount          | -p functional-526000                                                                                               | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT |                     |
	|                | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2147968616/001:/mount3 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	| ssh            | functional-526000 ssh findmnt                                                                                      | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT | 06 Sep 23 16:41 PDT |
	|                | -T /mount1                                                                                                         |                   |         |         |                     |                     |
	| ssh            | functional-526000 ssh findmnt                                                                                      | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT | 06 Sep 23 16:41 PDT |
	|                | -T /mount2                                                                                                         |                   |         |         |                     |                     |
	| ssh            | functional-526000 ssh findmnt                                                                                      | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT | 06 Sep 23 16:41 PDT |
	|                | -T /mount3                                                                                                         |                   |         |         |                     |                     |
	| mount          | -p functional-526000                                                                                               | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT |                     |
	|                | --kill=true                                                                                                        |                   |         |         |                     |                     |
	| start          | -p functional-526000                                                                                               | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT |                     |
	|                | --dry-run --memory                                                                                                 |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                                                                                            |                   |         |         |                     |                     |
	|                | --driver=qemu2                                                                                                     |                   |         |         |                     |                     |
	| start          | -p functional-526000 --dry-run                                                                                     | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT |                     |
	|                | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	|                | --driver=qemu2                                                                                                     |                   |         |         |                     |                     |
	| start          | -p functional-526000                                                                                               | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT |                     |
	|                | --dry-run --memory                                                                                                 |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                                                                                            |                   |         |         |                     |                     |
	|                | --driver=qemu2                                                                                                     |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                                                                                                 | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT | 06 Sep 23 16:41 PDT |
	|                | -p functional-526000                                                                                               |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	| update-context | functional-526000                                                                                                  | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT | 06 Sep 23 16:41 PDT |
	|                | update-context                                                                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                                                             |                   |         |         |                     |                     |
	| update-context | functional-526000                                                                                                  | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT | 06 Sep 23 16:41 PDT |
	|                | update-context                                                                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                                                             |                   |         |         |                     |                     |
	| update-context | functional-526000                                                                                                  | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT | 06 Sep 23 16:41 PDT |
	|                | update-context                                                                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                                                             |                   |         |         |                     |                     |
	| image          | functional-526000                                                                                                  | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT | 06 Sep 23 16:41 PDT |
	|                | image ls --format short                                                                                            |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                                                  |                   |         |         |                     |                     |
	| image          | functional-526000                                                                                                  | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT | 06 Sep 23 16:41 PDT |
	|                | image ls --format yaml                                                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                                                  |                   |         |         |                     |                     |
	| ssh            | functional-526000 ssh pgrep                                                                                        | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT |                     |
	|                | buildkitd                                                                                                          |                   |         |         |                     |                     |
	| image          | functional-526000 image build -t                                                                                   | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT | 06 Sep 23 16:41 PDT |
	|                | localhost/my-image:functional-526000                                                                               |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                                                                   |                   |         |         |                     |                     |
	| image          | functional-526000 image ls                                                                                         | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT | 06 Sep 23 16:41 PDT |
	| image          | functional-526000                                                                                                  | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT | 06 Sep 23 16:41 PDT |
	|                | image ls --format json                                                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                                                  |                   |         |         |                     |                     |
	| image          | functional-526000                                                                                                  | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT | 06 Sep 23 16:41 PDT |
	|                | image ls --format table                                                                                            |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                                                  |                   |         |         |                     |                     |
	| delete         | -p functional-526000                                                                                               | functional-526000 | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT | 06 Sep 23 16:41 PDT |
	| start          | -p image-320000 --driver=qemu2                                                                                     | image-320000      | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT | 06 Sep 23 16:42 PDT |
	|                |                                                                                                                    |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                                                                                                | image-320000      | jenkins | v1.31.2 | 06 Sep 23 16:42 PDT | 06 Sep 23 16:42 PDT |
	|                | ./testdata/image-build/test-normal                                                                                 |                   |         |         |                     |                     |
	|                | -p image-320000                                                                                                    |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                                                                                                | image-320000      | jenkins | v1.31.2 | 06 Sep 23 16:42 PDT | 06 Sep 23 16:42 PDT |
	|                | --build-opt=build-arg=ENV_A=test_env_str                                                                           |                   |         |         |                     |                     |
	|                | --build-opt=no-cache                                                                                               |                   |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p                                                                                 |                   |         |         |                     |                     |
	|                | image-320000                                                                                                       |                   |         |         |                     |                     |
	|----------------|--------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/06 16:41:51
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
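The `[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg` header describes the klog-style format of every line that follows. A minimal sketch (hypothetical helper, not part of minikube) that splits one such line into its fields:

```python
import re

# klog-style line: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
KLOG_RE = re.compile(
    r"^(?P<level>[IWEF])(?P<mmdd>\d{4}) "
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d{6})\s+"
    r"(?P<tid>\d+) (?P<file>[^:]+):(?P<line>\d+)\] (?P<msg>.*)$"
)

def parse_klog(line: str) -> dict:
    """Return level, date, time, thread id, source location, and message."""
    m = KLOG_RE.match(line)
    if m is None:
        raise ValueError(f"not a klog line: {line!r}")
    return m.groupdict()

rec = parse_klog("I0906 16:41:51.562140    2138 out.go:296] Setting OutFile to fd 1 ...")
print(rec["level"], rec["file"], rec["msg"])
```

The level prefix (`I`nfo, `W`arning, `E`rror, `F`atal) is why the warning at 16:41:51.578639 below stands out as `W0906` among the `I0906` lines.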
	I0906 16:41:51.562140    2138 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:41:51.562257    2138 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:41:51.562258    2138 out.go:309] Setting ErrFile to fd 2...
	I0906 16:41:51.562260    2138 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:41:51.562369    2138 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:41:51.563335    2138 out.go:303] Setting JSON to false
	I0906 16:41:51.578570    2138 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":685,"bootTime":1694043026,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0906 16:41:51.578639    2138 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 16:41:51.582211    2138 out.go:177] * [image-320000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 16:41:51.590039    2138 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 16:41:51.590069    2138 notify.go:220] Checking for updates...
	I0906 16:41:51.594069    2138 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:41:51.596950    2138 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 16:41:51.600064    2138 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 16:41:51.603070    2138 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	I0906 16:41:51.609967    2138 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 16:41:51.613224    2138 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 16:41:51.616999    2138 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 16:41:51.624061    2138 start.go:298] selected driver: qemu2
	I0906 16:41:51.624065    2138 start.go:902] validating driver "qemu2" against <nil>
	I0906 16:41:51.624072    2138 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 16:41:51.624143    2138 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 16:41:51.627080    2138 out.go:177] * Automatically selected the socket_vmnet network
	I0906 16:41:51.630080    2138 start_flags.go:384] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0906 16:41:51.630172    2138 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0906 16:41:51.630191    2138 cni.go:84] Creating CNI manager for ""
	I0906 16:41:51.630199    2138 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 16:41:51.630203    2138 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 16:41:51.630207    2138 start_flags.go:321] config:
	{Name:image-320000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:image-320000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:41:51.634492    2138 iso.go:125] acquiring lock: {Name:mk208580a41bd83b6238142fbb7ad0497ef8967b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:41:51.641092    2138 out.go:177] * Starting control plane node image-320000 in cluster image-320000
	I0906 16:41:51.644978    2138 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 16:41:51.645007    2138 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 16:41:51.645027    2138 cache.go:57] Caching tarball of preloaded images
	I0906 16:41:51.645102    2138 preload.go:174] Found /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 16:41:51.645108    2138 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 16:41:51.645319    2138 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/image-320000/config.json ...
	I0906 16:41:51.645330    2138 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/image-320000/config.json: {Name:mkc30b792cb8b6da1210d42d1eca1d9902becf3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:41:51.645539    2138 start.go:365] acquiring machines lock for image-320000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:41:51.645565    2138 start.go:369] acquired machines lock for "image-320000" in 22.583µs
	I0906 16:41:51.645574    2138 start.go:93] Provisioning new machine with config: &{Name:image-320000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:image-320000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:41:51.645609    2138 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:41:51.652028    2138 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0906 16:41:51.673286    2138 start.go:159] libmachine.API.Create for "image-320000" (driver="qemu2")
	I0906 16:41:51.673319    2138 client.go:168] LocalClient.Create starting
	I0906 16:41:51.673413    2138 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:41:51.673452    2138 main.go:141] libmachine: Decoding PEM data...
	I0906 16:41:51.673461    2138 main.go:141] libmachine: Parsing certificate...
	I0906 16:41:51.673500    2138 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:41:51.673516    2138 main.go:141] libmachine: Decoding PEM data...
	I0906 16:41:51.673524    2138 main.go:141] libmachine: Parsing certificate...
	I0906 16:41:51.673856    2138 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:41:51.790479    2138 main.go:141] libmachine: Creating SSH key...
	I0906 16:41:51.910893    2138 main.go:141] libmachine: Creating Disk image...
	I0906 16:41:51.910896    2138 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:41:51.911030    2138 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/image-320000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/image-320000/disk.qcow2
	I0906 16:41:51.921494    2138 main.go:141] libmachine: STDOUT: 
	I0906 16:41:51.921508    2138 main.go:141] libmachine: STDERR: 
	I0906 16:41:51.921565    2138 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/image-320000/disk.qcow2 +20000M
	I0906 16:41:51.928883    2138 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:41:51.928903    2138 main.go:141] libmachine: STDERR: 
	I0906 16:41:51.928925    2138 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/image-320000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/image-320000/disk.qcow2
	I0906 16:41:51.928930    2138 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:41:51.928967    2138 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/image-320000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/image-320000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/image-320000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:90:d3:f1:f8:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/image-320000/disk.qcow2
	I0906 16:41:51.970503    2138 main.go:141] libmachine: STDOUT: 
	I0906 16:41:51.970517    2138 main.go:141] libmachine: STDERR: 
	I0906 16:41:51.970520    2138 main.go:141] libmachine: Attempt 0
	I0906 16:41:51.970531    2138 main.go:141] libmachine: Searching for 3e:90:d3:f1:f8:6 in /var/db/dhcpd_leases ...
	I0906 16:41:51.970653    2138 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0906 16:41:51.970670    2138 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:da:1f:56:93:80:a0 ID:1,da:1f:56:93:80:a0 Lease:0x64fa5f0d}
	I0906 16:41:51.970676    2138 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:3e:f2:cf:7:dc:7c ID:1,3e:f2:cf:7:dc:7c Lease:0x64f90d80}
	I0906 16:41:51.970681    2138 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:3e:de:51:c7:76:4e ID:1,3e:de:51:c7:76:4e Lease:0x64fa5ebc}
	I0906 16:41:53.972822    2138 main.go:141] libmachine: Attempt 1
	I0906 16:41:53.972881    2138 main.go:141] libmachine: Searching for 3e:90:d3:f1:f8:6 in /var/db/dhcpd_leases ...
	I0906 16:41:53.973244    2138 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0906 16:41:53.973285    2138 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:da:1f:56:93:80:a0 ID:1,da:1f:56:93:80:a0 Lease:0x64fa5f0d}
	I0906 16:41:53.973312    2138 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:3e:f2:cf:7:dc:7c ID:1,3e:f2:cf:7:dc:7c Lease:0x64f90d80}
	I0906 16:41:53.973385    2138 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:3e:de:51:c7:76:4e ID:1,3e:de:51:c7:76:4e Lease:0x64fa5ebc}
	I0906 16:41:55.975497    2138 main.go:141] libmachine: Attempt 2
	I0906 16:41:55.975514    2138 main.go:141] libmachine: Searching for 3e:90:d3:f1:f8:6 in /var/db/dhcpd_leases ...
	I0906 16:41:55.975642    2138 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0906 16:41:55.975652    2138 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:da:1f:56:93:80:a0 ID:1,da:1f:56:93:80:a0 Lease:0x64fa5f0d}
	I0906 16:41:55.975657    2138 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:3e:f2:cf:7:dc:7c ID:1,3e:f2:cf:7:dc:7c Lease:0x64f90d80}
	I0906 16:41:55.975661    2138 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:3e:de:51:c7:76:4e ID:1,3e:de:51:c7:76:4e Lease:0x64fa5ebc}
	I0906 16:41:57.977658    2138 main.go:141] libmachine: Attempt 3
	I0906 16:41:57.977663    2138 main.go:141] libmachine: Searching for 3e:90:d3:f1:f8:6 in /var/db/dhcpd_leases ...
	I0906 16:41:57.977701    2138 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0906 16:41:57.977705    2138 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:da:1f:56:93:80:a0 ID:1,da:1f:56:93:80:a0 Lease:0x64fa5f0d}
	I0906 16:41:57.977710    2138 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:3e:f2:cf:7:dc:7c ID:1,3e:f2:cf:7:dc:7c Lease:0x64f90d80}
	I0906 16:41:57.977714    2138 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:3e:de:51:c7:76:4e ID:1,3e:de:51:c7:76:4e Lease:0x64fa5ebc}
	I0906 16:41:59.979719    2138 main.go:141] libmachine: Attempt 4
	I0906 16:41:59.979725    2138 main.go:141] libmachine: Searching for 3e:90:d3:f1:f8:6 in /var/db/dhcpd_leases ...
	I0906 16:41:59.979868    2138 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0906 16:41:59.979882    2138 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:da:1f:56:93:80:a0 ID:1,da:1f:56:93:80:a0 Lease:0x64fa5f0d}
	I0906 16:41:59.979890    2138 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:3e:f2:cf:7:dc:7c ID:1,3e:f2:cf:7:dc:7c Lease:0x64f90d80}
	I0906 16:41:59.979894    2138 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:3e:de:51:c7:76:4e ID:1,3e:de:51:c7:76:4e Lease:0x64fa5ebc}
	I0906 16:42:01.981900    2138 main.go:141] libmachine: Attempt 5
	I0906 16:42:01.981915    2138 main.go:141] libmachine: Searching for 3e:90:d3:f1:f8:6 in /var/db/dhcpd_leases ...
	I0906 16:42:01.981989    2138 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0906 16:42:01.981997    2138 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:da:1f:56:93:80:a0 ID:1,da:1f:56:93:80:a0 Lease:0x64fa5f0d}
	I0906 16:42:01.982001    2138 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:3e:f2:cf:7:dc:7c ID:1,3e:f2:cf:7:dc:7c Lease:0x64f90d80}
	I0906 16:42:01.982005    2138 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:3e:de:51:c7:76:4e ID:1,3e:de:51:c7:76:4e Lease:0x64fa5ebc}
	I0906 16:42:03.984041    2138 main.go:141] libmachine: Attempt 6
	I0906 16:42:03.984060    2138 main.go:141] libmachine: Searching for 3e:90:d3:f1:f8:6 in /var/db/dhcpd_leases ...
	I0906 16:42:03.984197    2138 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0906 16:42:03.984209    2138 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:3e:90:d3:f1:f8:6 ID:1,3e:90:d3:f1:f8:6 Lease:0x64fa5fca}
	I0906 16:42:03.984212    2138 main.go:141] libmachine: Found match: 3e:90:d3:f1:f8:6
	I0906 16:42:03.984223    2138 main.go:141] libmachine: IP: 192.168.105.5
	I0906 16:42:03.984228    2138 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
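Note that the VM was launched with `mac=3e:90:d3:f1:f8:06` but the lease search above looks for `3e:90:d3:f1:f8:6`: macOS writes `/var/db/dhcpd_leases` octets without leading zeros, so the MAC must be normalized before matching. A minimal sketch of that normalization and lookup (hypothetical helpers, illustrating the search the log performs, with a simplified lease format):

```python
import re

def normalize_mac(mac: str) -> str:
    """Strip leading zeros from each octet, matching dhcpd_leases notation."""
    return ":".join(part.lstrip("0") or "0" for part in mac.lower().split(":"))

def find_lease(leases: str, mac: str):
    """Return the IP bound to mac in dhcpd_leases-style entries, or None."""
    want = normalize_mac(mac)
    for entry in re.finditer(r"IPAddress:(\S+) HWAddress:(\S+)", leases):
        ip, hw = entry.groups()
        if hw == want:
            return ip
    return None

# Sample entries in the shape the log prints them (truncated).
leases = (
    "{Name:minikube IPAddress:192.168.105.4 HWAddress:da:1f:56:93:80:a0 ...}\n"
    "{Name:minikube IPAddress:192.168.105.5 HWAddress:3e:90:d3:f1:f8:6 ...}\n"
)
print(find_lease(leases, "3e:90:d3:f1:f8:06"))
```

The retry loop in the log (Attempt 0 through 6, two seconds apart) simply repeats this lookup until the guest's DHCP lease appears, here yielding 192.168.105.5 on attempt 6.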
	I0906 16:42:04.993193    2138 machine.go:88] provisioning docker machine ...
	I0906 16:42:04.993210    2138 buildroot.go:166] provisioning hostname "image-320000"
	I0906 16:42:04.993254    2138 main.go:141] libmachine: Using SSH client type: native
	I0906 16:42:04.993510    2138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010d63b0] 0x1010d8e10 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0906 16:42:04.993514    2138 main.go:141] libmachine: About to run SSH command:
	sudo hostname image-320000 && echo "image-320000" | sudo tee /etc/hostname
	I0906 16:42:05.064499    2138 main.go:141] libmachine: SSH cmd err, output: <nil>: image-320000
	
	I0906 16:42:05.064562    2138 main.go:141] libmachine: Using SSH client type: native
	I0906 16:42:05.064837    2138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010d63b0] 0x1010d8e10 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0906 16:42:05.064843    2138 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\simage-320000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 image-320000/g' /etc/hosts;
				else 
					echo '127.0.1.1 image-320000' | sudo tee -a /etc/hosts; 
				fi
			fi
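The SSH command above ensures `/etc/hosts` maps `127.0.1.1` to the new hostname: if the hostname is already present it does nothing; otherwise it rewrites an existing `127.0.1.1` line or appends one. The same logic as a pure function (a hypothetical illustration, not minikube code):

```python
def ensure_hosts_entry(hosts: str, hostname: str) -> str:
    """If hostname is absent from an /etc/hosts string, rewrite the
    127.0.1.1 line when present, otherwise append one."""
    lines = hosts.splitlines()
    if any(hostname in line.split() for line in lines):
        return hosts  # hostname already mapped; leave the file untouched
    for i, line in enumerate(lines):
        if line.startswith("127.0.1.1"):
            lines[i] = f"127.0.1.1 {hostname}"
            break
    else:  # no 127.0.1.1 line found
        lines.append(f"127.0.1.1 {hostname}")
    return "\n".join(lines) + "\n"

before = "127.0.0.1 localhost\n127.0.1.1 oldname\n"
print(ensure_hosts_entry(before, "image-320000"))
```

Keeping the operation idempotent matters here because provisioning may be re-run against the same guest without duplicating entries.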
	I0906 16:42:05.127224    2138 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 16:42:05.127231    2138 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17174-979/.minikube CaCertPath:/Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17174-979/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17174-979/.minikube}
	I0906 16:42:05.127241    2138 buildroot.go:174] setting up certificates
	I0906 16:42:05.127245    2138 provision.go:83] configureAuth start
	I0906 16:42:05.127247    2138 provision.go:138] copyHostCerts
	I0906 16:42:05.127312    2138 exec_runner.go:144] found /Users/jenkins/minikube-integration/17174-979/.minikube/ca.pem, removing ...
	I0906 16:42:05.127317    2138 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17174-979/.minikube/ca.pem
	I0906 16:42:05.127429    2138 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17174-979/.minikube/ca.pem (1082 bytes)
	I0906 16:42:05.127593    2138 exec_runner.go:144] found /Users/jenkins/minikube-integration/17174-979/.minikube/cert.pem, removing ...
	I0906 16:42:05.127595    2138 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17174-979/.minikube/cert.pem
	I0906 16:42:05.127638    2138 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17174-979/.minikube/cert.pem (1123 bytes)
	I0906 16:42:05.127740    2138 exec_runner.go:144] found /Users/jenkins/minikube-integration/17174-979/.minikube/key.pem, removing ...
	I0906 16:42:05.127744    2138 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17174-979/.minikube/key.pem
	I0906 16:42:05.127782    2138 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17174-979/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17174-979/.minikube/key.pem (1679 bytes)
	I0906 16:42:05.127875    2138 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17174-979/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca-key.pem org=jenkins.image-320000 san=[192.168.105.5 192.168.105.5 localhost 127.0.0.1 minikube image-320000]
	I0906 16:42:05.228184    2138 provision.go:172] copyRemoteCerts
	I0906 16:42:05.228214    2138 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 16:42:05.228219    2138 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/image-320000/id_rsa Username:docker}
	I0906 16:42:05.261961    2138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 16:42:05.269066    2138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0906 16:42:05.275825    2138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 16:42:05.282827    2138 provision.go:86] duration metric: configureAuth took 155.577833ms
	I0906 16:42:05.282832    2138 buildroot.go:189] setting minikube options for container-runtime
	I0906 16:42:05.282922    2138 config.go:182] Loaded profile config "image-320000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:42:05.282949    2138 main.go:141] libmachine: Using SSH client type: native
	I0906 16:42:05.283161    2138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010d63b0] 0x1010d8e10 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0906 16:42:05.283164    2138 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 16:42:05.347311    2138 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 16:42:05.347319    2138 buildroot.go:70] root file system type: tmpfs
	I0906 16:42:05.347435    2138 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 16:42:05.347495    2138 main.go:141] libmachine: Using SSH client type: native
	I0906 16:42:05.347748    2138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010d63b0] 0x1010d8e10 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0906 16:42:05.347782    2138 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 16:42:05.416536    2138 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 16:42:05.416581    2138 main.go:141] libmachine: Using SSH client type: native
	I0906 16:42:05.416854    2138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010d63b0] 0x1010d8e10 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0906 16:42:05.416861    2138 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 16:42:05.782392    2138 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 16:42:05.782404    2138 machine.go:91] provisioned docker machine in 789.217667ms
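	[Editor's note] The `diff ... || { mv ...; }` command above is an update-if-changed pattern: the candidate unit is written to `docker.service.new`, and the live file is replaced (followed by a daemon-reload and restart) only when the two differ, so an unchanged config causes no restart. A minimal sketch of the same pattern, using hypothetical `/tmp` paths and an `echo` standing in for the systemctl calls:

```shell
new=/tmp/docker.service.new
cur=/tmp/docker.service
printf '%s\n' '[Unit]' 'Description=demo' > "$new"
if ! diff -u "$cur" "$new" >/dev/null 2>&1; then
    mv "$new" "$cur"    # first run, or content changed
    echo "updated"      # stand-in for daemon-reload + restart
else
    rm -f "$new"        # identical: leave the service alone
    echo "unchanged"
fi
```

Note that `diff` also exits non-zero when the current file does not yet exist, which is exactly the "No such file or directory" first-provision case seen in the log output above.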
	I0906 16:42:05.782410    2138 client.go:171] LocalClient.Create took 14.109376209s
	I0906 16:42:05.782420    2138 start.go:167] duration metric: libmachine.API.Create for "image-320000" took 14.109427042s
	I0906 16:42:05.782422    2138 start.go:300] post-start starting for "image-320000" (driver="qemu2")
	I0906 16:42:05.782426    2138 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 16:42:05.782510    2138 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 16:42:05.782520    2138 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/image-320000/id_rsa Username:docker}
	I0906 16:42:05.818041    2138 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 16:42:05.819633    2138 info.go:137] Remote host: Buildroot 2021.02.12
	I0906 16:42:05.819640    2138 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17174-979/.minikube/addons for local assets ...
	I0906 16:42:05.819711    2138 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17174-979/.minikube/files for local assets ...
	I0906 16:42:05.819820    2138 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17174-979/.minikube/files/etc/ssl/certs/13972.pem -> 13972.pem in /etc/ssl/certs
	I0906 16:42:05.819935    2138 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 16:42:05.822972    2138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/files/etc/ssl/certs/13972.pem --> /etc/ssl/certs/13972.pem (1708 bytes)
	I0906 16:42:05.830008    2138 start.go:303] post-start completed in 47.5825ms
	I0906 16:42:05.830351    2138 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/image-320000/config.json ...
	I0906 16:42:05.830501    2138 start.go:128] duration metric: createHost completed in 14.185178792s
	I0906 16:42:05.830523    2138 main.go:141] libmachine: Using SSH client type: native
	I0906 16:42:05.830737    2138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010d63b0] 0x1010d8e10 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0906 16:42:05.830740    2138 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 16:42:05.896332    2138 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694043725.514386918
	
	I0906 16:42:05.896336    2138 fix.go:206] guest clock: 1694043725.514386918
	I0906 16:42:05.896340    2138 fix.go:219] Guest: 2023-09-06 16:42:05.514386918 -0700 PDT Remote: 2023-09-06 16:42:05.830504 -0700 PDT m=+14.289039917 (delta=-316.117082ms)
	I0906 16:42:05.896349    2138 fix.go:190] guest clock delta is within tolerance: -316.117082ms
	I0906 16:42:05.896350    2138 start.go:83] releasing machines lock for "image-320000", held for 14.251074125s
	I0906 16:42:05.896615    2138 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 16:42:05.896617    2138 ssh_runner.go:195] Run: cat /version.json
	I0906 16:42:05.896622    2138 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/image-320000/id_rsa Username:docker}
	I0906 16:42:05.896634    2138 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/image-320000/id_rsa Username:docker}
	I0906 16:42:05.973854    2138 ssh_runner.go:195] Run: systemctl --version
	I0906 16:42:05.976317    2138 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 16:42:05.978437    2138 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 16:42:05.978463    2138 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 16:42:05.984290    2138 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 16:42:05.984295    2138 start.go:466] detecting cgroup driver to use...
	I0906 16:42:05.984373    2138 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 16:42:05.990488    2138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0906 16:42:05.994131    2138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 16:42:05.997577    2138 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 16:42:05.997601    2138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 16:42:06.000688    2138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 16:42:06.003479    2138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 16:42:06.006799    2138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 16:42:06.009678    2138 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 16:42:06.012829    2138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 16:42:06.016334    2138 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 16:42:06.019157    2138 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 16:42:06.022081    2138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 16:42:06.098422    2138 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 16:42:06.105165    2138 start.go:466] detecting cgroup driver to use...
	I0906 16:42:06.105229    2138 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 16:42:06.110858    2138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 16:42:06.115983    2138 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 16:42:06.121807    2138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 16:42:06.126337    2138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 16:42:06.131163    2138 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 16:42:06.177293    2138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 16:42:06.182285    2138 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 16:42:06.187654    2138 ssh_runner.go:195] Run: which cri-dockerd
	I0906 16:42:06.189021    2138 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 16:42:06.191490    2138 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0906 16:42:06.196690    2138 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 16:42:06.271118    2138 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 16:42:06.352220    2138 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 16:42:06.352229    2138 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0906 16:42:06.357438    2138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 16:42:06.435582    2138 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 16:42:07.603105    2138 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.167536542s)
	I0906 16:42:07.603171    2138 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 16:42:07.690888    2138 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0906 16:42:07.769800    2138 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 16:42:07.858664    2138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 16:42:07.935570    2138 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0906 16:42:07.942766    2138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 16:42:08.027617    2138 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0906 16:42:08.051711    2138 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 16:42:08.051785    2138 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 16:42:08.054537    2138 start.go:534] Will wait 60s for crictl version
	I0906 16:42:08.054571    2138 ssh_runner.go:195] Run: which crictl
	I0906 16:42:08.056144    2138 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 16:42:08.071573    2138 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.5
	RuntimeApiVersion:  v1alpha2
	I0906 16:42:08.071634    2138 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 16:42:08.081121    2138 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 16:42:08.092061    2138 out.go:204] * Preparing Kubernetes v1.28.1 on Docker 24.0.5 ...
	I0906 16:42:08.092204    2138 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0906 16:42:08.093642    2138 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
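	[Editor's note] The one-liner above updates `/etc/hosts` by filtering out any stale entry for the name, appending the fresh mapping, and copying a temp file back over the original so the edit lands in a single replace. A simplified sketch of the same pattern, with hypothetical paths and addresses (the real command anchors the match on a tab before the hostname):

```shell
hosts=/tmp/hosts.demo
printf '127.0.0.1 localhost\n192.168.105.1 host.minikube.internal\n' > "$hosts"
{ grep -v 'host.minikube.internal' "$hosts"
  printf '192.168.105.9 host.minikube.internal\n'; } > /tmp/hosts.demo.new
cp /tmp/hosts.demo.new "$hosts"
cat "$hosts"
```

After the rewrite, the old `192.168.105.1` mapping is gone and only the new one remains alongside the untouched `localhost` line.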
	I0906 16:42:08.097222    2138 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 16:42:08.097260    2138 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 16:42:08.102814    2138 docker.go:636] Got preloaded images: 
	I0906 16:42:08.102817    2138 docker.go:642] registry.k8s.io/kube-apiserver:v1.28.1 wasn't preloaded
	I0906 16:42:08.102845    2138 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0906 16:42:08.105960    2138 ssh_runner.go:195] Run: which lz4
	I0906 16:42:08.107251    2138 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 16:42:08.108544    2138 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 16:42:08.108553    2138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356902558 bytes)
	I0906 16:42:09.434095    2138 docker.go:600] Took 1.326910 seconds to copy over tarball
	I0906 16:42:09.434147    2138 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 16:42:10.454445    2138 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.02030575s)
	I0906 16:42:10.454454    2138 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 16:42:10.470354    2138 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0906 16:42:10.473680    2138 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0906 16:42:10.478874    2138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 16:42:10.557106    2138 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 16:42:12.019542    2138 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.462451791s)
	I0906 16:42:12.019638    2138 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 16:42:12.025678    2138 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0906 16:42:12.025684    2138 cache_images.go:84] Images are preloaded, skipping loading
	I0906 16:42:12.025741    2138 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 16:42:12.033227    2138 cni.go:84] Creating CNI manager for ""
	I0906 16:42:12.033232    2138 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 16:42:12.033240    2138 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 16:42:12.033247    2138 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.5 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:image-320000 NodeName:image-320000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 16:42:12.033320    2138 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "image-320000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 16:42:12.033360    2138 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=image-320000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:image-320000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0906 16:42:12.033410    2138 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0906 16:42:12.036400    2138 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 16:42:12.036424    2138 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 16:42:12.039180    2138 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0906 16:42:12.044521    2138 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 16:42:12.049225    2138 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0906 16:42:12.054516    2138 ssh_runner.go:195] Run: grep 192.168.105.5	control-plane.minikube.internal$ /etc/hosts
	I0906 16:42:12.056084    2138 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 16:42:12.059985    2138 certs.go:56] Setting up /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/image-320000 for IP: 192.168.105.5
	I0906 16:42:12.059992    2138 certs.go:190] acquiring lock for shared ca certs: {Name:mk43c724e281040fff2ff442572568aeff9573b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:42:12.060124    2138 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17174-979/.minikube/ca.key
	I0906 16:42:12.060160    2138 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17174-979/.minikube/proxy-client-ca.key
	I0906 16:42:12.060194    2138 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/image-320000/client.key
	I0906 16:42:12.060198    2138 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/image-320000/client.crt with IP's: []
	I0906 16:42:12.194860    2138 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/image-320000/client.crt ...
	I0906 16:42:12.194863    2138 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/image-320000/client.crt: {Name:mk97dae4717ffbf01c7e632fd7aab395991aaa24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:42:12.195125    2138 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/image-320000/client.key ...
	I0906 16:42:12.195128    2138 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/image-320000/client.key: {Name:mkb4071310f706c81e393d37333b17b64d899801 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:42:12.195238    2138 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/image-320000/apiserver.key.e69b33ca
	I0906 16:42:12.195245    2138 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/image-320000/apiserver.crt.e69b33ca with IP's: [192.168.105.5 10.96.0.1 127.0.0.1 10.0.0.1]
	I0906 16:42:12.413125    2138 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/image-320000/apiserver.crt.e69b33ca ...
	I0906 16:42:12.413130    2138 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/image-320000/apiserver.crt.e69b33ca: {Name:mk97997bc159f9329f08a2c0baebb5712537131c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:42:12.413373    2138 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/image-320000/apiserver.key.e69b33ca ...
	I0906 16:42:12.413375    2138 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/image-320000/apiserver.key.e69b33ca: {Name:mkb195a561e125bca83666ab0a6da18a37566094 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:42:12.413489    2138 certs.go:337] copying /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/image-320000/apiserver.crt.e69b33ca -> /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/image-320000/apiserver.crt
	I0906 16:42:12.413703    2138 certs.go:341] copying /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/image-320000/apiserver.key.e69b33ca -> /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/image-320000/apiserver.key
	I0906 16:42:12.413794    2138 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/image-320000/proxy-client.key
	I0906 16:42:12.413799    2138 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/image-320000/proxy-client.crt with IP's: []
	I0906 16:42:12.506645    2138 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/image-320000/proxy-client.crt ...
	I0906 16:42:12.506647    2138 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/image-320000/proxy-client.crt: {Name:mk40628e6a48f699b0705929fa20b4abc0ac0039 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:42:12.506778    2138 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/image-320000/proxy-client.key ...
	I0906 16:42:12.506779    2138 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/image-320000/proxy-client.key: {Name:mk83cde743e9b6a0b748d84f43ebba713a6db72d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:42:12.507012    2138 certs.go:437] found cert: /Users/jenkins/minikube-integration/17174-979/.minikube/certs/Users/jenkins/minikube-integration/17174-979/.minikube/certs/1397.pem (1338 bytes)
	W0906 16:42:12.507040    2138 certs.go:433] ignoring /Users/jenkins/minikube-integration/17174-979/.minikube/certs/Users/jenkins/minikube-integration/17174-979/.minikube/certs/1397_empty.pem, impossibly tiny 0 bytes
	I0906 16:42:12.507046    2138 certs.go:437] found cert: /Users/jenkins/minikube-integration/17174-979/.minikube/certs/Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca-key.pem (1675 bytes)
	I0906 16:42:12.507064    2138 certs.go:437] found cert: /Users/jenkins/minikube-integration/17174-979/.minikube/certs/Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem (1082 bytes)
	I0906 16:42:12.507082    2138 certs.go:437] found cert: /Users/jenkins/minikube-integration/17174-979/.minikube/certs/Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem (1123 bytes)
	I0906 16:42:12.507099    2138 certs.go:437] found cert: /Users/jenkins/minikube-integration/17174-979/.minikube/certs/Users/jenkins/minikube-integration/17174-979/.minikube/certs/key.pem (1679 bytes)
	I0906 16:42:12.507139    2138 certs.go:437] found cert: /Users/jenkins/minikube-integration/17174-979/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17174-979/.minikube/files/etc/ssl/certs/13972.pem (1708 bytes)
	I0906 16:42:12.507461    2138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/image-320000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0906 16:42:12.515183    2138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/image-320000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 16:42:12.522552    2138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/image-320000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 16:42:12.530123    2138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/image-320000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 16:42:12.537314    2138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 16:42:12.543844    2138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 16:42:12.550790    2138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 16:42:12.558136    2138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 16:42:12.565410    2138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/files/etc/ssl/certs/13972.pem --> /usr/share/ca-certificates/13972.pem (1708 bytes)
	I0906 16:42:12.572238    2138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 16:42:12.578902    2138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/certs/1397.pem --> /usr/share/ca-certificates/1397.pem (1338 bytes)
	I0906 16:42:12.586180    2138 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 16:42:12.591439    2138 ssh_runner.go:195] Run: openssl version
	I0906 16:42:12.593549    2138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13972.pem && ln -fs /usr/share/ca-certificates/13972.pem /etc/ssl/certs/13972.pem"
	I0906 16:42:12.596576    2138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13972.pem
	I0906 16:42:12.598139    2138 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 23:38 /usr/share/ca-certificates/13972.pem
	I0906 16:42:12.598161    2138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13972.pem
	I0906 16:42:12.600082    2138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13972.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 16:42:12.603340    2138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 16:42:12.606858    2138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 16:42:12.608611    2138 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:37 /usr/share/ca-certificates/minikubeCA.pem
	I0906 16:42:12.608629    2138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 16:42:12.610343    2138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 16:42:12.613623    2138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1397.pem && ln -fs /usr/share/ca-certificates/1397.pem /etc/ssl/certs/1397.pem"
	I0906 16:42:12.616633    2138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1397.pem
	I0906 16:42:12.618167    2138 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 23:38 /usr/share/ca-certificates/1397.pem
	I0906 16:42:12.618184    2138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1397.pem
	I0906 16:42:12.620080    2138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1397.pem /etc/ssl/certs/51391683.0"
	I0906 16:42:12.623289    2138 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0906 16:42:12.624555    2138 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0906 16:42:12.624582    2138 kubeadm.go:404] StartCluster: {Name:image-320000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:image-320000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:42:12.624649    2138 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 16:42:12.632963    2138 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 16:42:12.635842    2138 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 16:42:12.638840    2138 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 16:42:12.642033    2138 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 16:42:12.642045    2138 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 16:42:12.663850    2138 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0906 16:42:12.663875    2138 kubeadm.go:322] [preflight] Running pre-flight checks
	I0906 16:42:12.718948    2138 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 16:42:12.719013    2138 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 16:42:12.719068    2138 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 16:42:12.775600    2138 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 16:42:12.782759    2138 out.go:204]   - Generating certificates and keys ...
	I0906 16:42:12.782801    2138 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0906 16:42:12.782841    2138 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0906 16:42:12.871156    2138 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0906 16:42:12.942913    2138 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0906 16:42:12.998665    2138 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0906 16:42:13.158679    2138 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0906 16:42:13.253972    2138 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0906 16:42:13.254042    2138 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [image-320000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0906 16:42:13.293957    2138 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0906 16:42:13.294008    2138 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [image-320000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0906 16:42:13.340435    2138 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0906 16:42:13.410757    2138 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0906 16:42:13.658993    2138 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0906 16:42:13.659028    2138 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 16:42:13.738797    2138 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 16:42:13.869604    2138 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 16:42:13.920405    2138 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 16:42:14.133592    2138 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 16:42:14.133823    2138 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 16:42:14.134996    2138 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 16:42:14.140952    2138 out.go:204]   - Booting up control plane ...
	I0906 16:42:14.141025    2138 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 16:42:14.141078    2138 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 16:42:14.141123    2138 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 16:42:14.142755    2138 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 16:42:14.142796    2138 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 16:42:14.142819    2138 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0906 16:42:14.227391    2138 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 16:42:17.729877    2138 kubeadm.go:322] [apiclient] All control plane components are healthy after 3.503548 seconds
	I0906 16:42:17.729943    2138 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 16:42:17.735998    2138 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 16:42:18.248615    2138 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 16:42:18.248717    2138 kubeadm.go:322] [mark-control-plane] Marking the node image-320000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 16:42:18.753373    2138 kubeadm.go:322] [bootstrap-token] Using token: 600qkm.ckgffmd93ab9oqp5
	I0906 16:42:18.759662    2138 out.go:204]   - Configuring RBAC rules ...
	I0906 16:42:18.759719    2138 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 16:42:18.760684    2138 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 16:42:18.767608    2138 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 16:42:18.768885    2138 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 16:42:18.770004    2138 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 16:42:18.771298    2138 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 16:42:18.775355    2138 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 16:42:18.935597    2138 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0906 16:42:19.163121    2138 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0906 16:42:19.163461    2138 kubeadm.go:322] 
	I0906 16:42:19.163492    2138 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0906 16:42:19.163494    2138 kubeadm.go:322] 
	I0906 16:42:19.163535    2138 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0906 16:42:19.163537    2138 kubeadm.go:322] 
	I0906 16:42:19.163552    2138 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0906 16:42:19.163577    2138 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 16:42:19.163599    2138 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 16:42:19.163600    2138 kubeadm.go:322] 
	I0906 16:42:19.163623    2138 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0906 16:42:19.163625    2138 kubeadm.go:322] 
	I0906 16:42:19.163648    2138 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 16:42:19.163650    2138 kubeadm.go:322] 
	I0906 16:42:19.163671    2138 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0906 16:42:19.163702    2138 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 16:42:19.163738    2138 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 16:42:19.163739    2138 kubeadm.go:322] 
	I0906 16:42:19.163789    2138 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 16:42:19.163827    2138 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0906 16:42:19.163828    2138 kubeadm.go:322] 
	I0906 16:42:19.163873    2138 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 600qkm.ckgffmd93ab9oqp5 \
	I0906 16:42:19.163917    2138 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5095446c3b17214aaa1a807af40fe852c4809cf7574bda1580a6e046d3ea63e1 \
	I0906 16:42:19.163925    2138 kubeadm.go:322] 	--control-plane 
	I0906 16:42:19.163927    2138 kubeadm.go:322] 
	I0906 16:42:19.163981    2138 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0906 16:42:19.163982    2138 kubeadm.go:322] 
	I0906 16:42:19.164017    2138 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 600qkm.ckgffmd93ab9oqp5 \
	I0906 16:42:19.164065    2138 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:5095446c3b17214aaa1a807af40fe852c4809cf7574bda1580a6e046d3ea63e1 
	I0906 16:42:19.164276    2138 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 16:42:19.164281    2138 cni.go:84] Creating CNI manager for ""
	I0906 16:42:19.164288    2138 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 16:42:19.172214    2138 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 16:42:19.176141    2138 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 16:42:19.179420    2138 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0906 16:42:19.184665    2138 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 16:42:19.184716    2138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:42:19.184720    2138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2 minikube.k8s.io/name=image-320000 minikube.k8s.io/updated_at=2023_09_06T16_42_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:42:19.187999    2138 ops.go:34] apiserver oom_adj: -16
	I0906 16:42:19.253633    2138 kubeadm.go:1081] duration metric: took 68.950417ms to wait for elevateKubeSystemPrivileges.
	I0906 16:42:19.253644    2138 kubeadm.go:406] StartCluster complete in 6.629199s
	I0906 16:42:19.253651    2138 settings.go:142] acquiring lock: {Name:mke09ef7a1e2d249f8e4127472ec9f16828a9cb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:42:19.253731    2138 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:42:19.254061    2138 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/kubeconfig: {Name:mk4d1ce1d23510730a8780064cdf633efa514467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:42:19.254233    2138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 16:42:19.254266    2138 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0906 16:42:19.254294    2138 addons.go:69] Setting storage-provisioner=true in profile "image-320000"
	I0906 16:42:19.254300    2138 addons.go:231] Setting addon storage-provisioner=true in "image-320000"
	I0906 16:42:19.254322    2138 host.go:66] Checking if "image-320000" exists ...
	I0906 16:42:19.254319    2138 addons.go:69] Setting default-storageclass=true in profile "image-320000"
	I0906 16:42:19.254340    2138 config.go:182] Loaded profile config "image-320000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:42:19.254365    2138 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "image-320000"
	I0906 16:42:19.259179    2138 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 16:42:19.263241    2138 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 16:42:19.263245    2138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 16:42:19.263253    2138 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/image-320000/id_rsa Username:docker}
	I0906 16:42:19.267756    2138 addons.go:231] Setting addon default-storageclass=true in "image-320000"
	I0906 16:42:19.267770    2138 host.go:66] Checking if "image-320000" exists ...
	I0906 16:42:19.268437    2138 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 16:42:19.268441    2138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 16:42:19.268446    2138 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/image-320000/id_rsa Username:docker}
	I0906 16:42:19.271166    2138 kapi.go:248] "coredns" deployment in "kube-system" namespace and "image-320000" context rescaled to 1 replicas
	I0906 16:42:19.271180    2138 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:42:19.275171    2138 out.go:177] * Verifying Kubernetes components...
	I0906 16:42:19.283105    2138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 16:42:19.304382    2138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0906 16:42:19.304670    2138 api_server.go:52] waiting for apiserver process to appear ...
	I0906 16:42:19.304709    2138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 16:42:19.308430    2138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 16:42:19.322525    2138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 16:42:19.736527    2138 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0906 16:42:19.736542    2138 api_server.go:72] duration metric: took 465.362917ms to wait for apiserver process to appear ...
	I0906 16:42:19.736545    2138 api_server.go:88] waiting for apiserver healthz status ...
	I0906 16:42:19.736553    2138 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0906 16:42:19.739835    2138 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
	I0906 16:42:19.740450    2138 api_server.go:141] control plane version: v1.28.1
	I0906 16:42:19.740454    2138 api_server.go:131] duration metric: took 3.907166ms to wait for apiserver health ...
	I0906 16:42:19.740460    2138 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 16:42:19.744058    2138 system_pods.go:59] 4 kube-system pods found
	I0906 16:42:19.744181    2138 system_pods.go:61] "etcd-image-320000" [7b8f18af-f44b-4b82-b487-9771b6f90fcb] Pending
	I0906 16:42:19.744185    2138 system_pods.go:61] "kube-apiserver-image-320000" [9db26ad9-d772-441a-aca2-b24291690661] Pending
	I0906 16:42:19.744187    2138 system_pods.go:61] "kube-controller-manager-image-320000" [15eff6dc-ef87-4095-b047-7c5c5f3282a6] Pending
	I0906 16:42:19.744189    2138 system_pods.go:61] "kube-scheduler-image-320000" [b62cc6ca-4fd4-4adb-858e-ce73255c1d6a] Pending
	I0906 16:42:19.744191    2138 system_pods.go:74] duration metric: took 3.729417ms to wait for pod list to return data ...
	I0906 16:42:19.744194    2138 kubeadm.go:581] duration metric: took 473.015667ms to wait for : map[apiserver:true system_pods:true] ...
	I0906 16:42:19.744200    2138 node_conditions.go:102] verifying NodePressure condition ...
	I0906 16:42:19.745723    2138 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0906 16:42:19.745730    2138 node_conditions.go:123] node cpu capacity is 2
	I0906 16:42:19.745735    2138 node_conditions.go:105] duration metric: took 1.533041ms to run NodePressure ...
	I0906 16:42:19.745739    2138 start.go:228] waiting for startup goroutines ...
	I0906 16:42:19.835093    2138 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0906 16:42:19.839169    2138 addons.go:502] enable addons completed in 584.914833ms: enabled=[default-storageclass storage-provisioner]
	I0906 16:42:19.839185    2138 start.go:233] waiting for cluster config update ...
	I0906 16:42:19.839190    2138 start.go:242] writing updated cluster config ...
	I0906 16:42:19.839489    2138 ssh_runner.go:195] Run: rm -f paused
	I0906 16:42:19.867494    2138 start.go:600] kubectl: 1.27.2, cluster: 1.28.1 (minor skew: 1)
	I0906 16:42:19.872076    2138 out.go:177] * Done! kubectl is now configured to use "image-320000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-09-06 23:42:02 UTC, ends at Wed 2023-09-06 23:42:21 UTC. --
	Sep 06 23:42:14 image-320000 cri-dockerd[993]: time="2023-09-06T23:42:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b625a985a4f843dbc5ffcda681513aa2bb37de70149791f1109dc06254e7c08e/resolv.conf as [nameserver 192.168.105.1]"
	Sep 06 23:42:14 image-320000 cri-dockerd[993]: time="2023-09-06T23:42:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ee8170b3397718063dae9dee95fda906f39d25f46775737a24978422f9afb550/resolv.conf as [nameserver 192.168.105.1]"
	Sep 06 23:42:14 image-320000 dockerd[1101]: time="2023-09-06T23:42:14.942579423Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 23:42:14 image-320000 dockerd[1101]: time="2023-09-06T23:42:14.942637839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 23:42:14 image-320000 dockerd[1101]: time="2023-09-06T23:42:14.942650048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 23:42:14 image-320000 dockerd[1101]: time="2023-09-06T23:42:14.942656506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 23:42:14 image-320000 dockerd[1101]: time="2023-09-06T23:42:14.953242464Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 23:42:14 image-320000 dockerd[1101]: time="2023-09-06T23:42:14.953353881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 23:42:14 image-320000 dockerd[1101]: time="2023-09-06T23:42:14.953378714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 23:42:14 image-320000 dockerd[1101]: time="2023-09-06T23:42:14.953399881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 23:42:15 image-320000 dockerd[1101]: time="2023-09-06T23:42:15.051044089Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 23:42:15 image-320000 dockerd[1101]: time="2023-09-06T23:42:15.051118506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 23:42:15 image-320000 dockerd[1101]: time="2023-09-06T23:42:15.051130881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 23:42:15 image-320000 dockerd[1101]: time="2023-09-06T23:42:15.051139714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 23:42:20 image-320000 dockerd[1095]: time="2023-09-06T23:42:20.532518509Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Sep 06 23:42:20 image-320000 dockerd[1095]: time="2023-09-06T23:42:20.658394800Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Sep 06 23:42:20 image-320000 dockerd[1095]: time="2023-09-06T23:42:20.674434217Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Sep 06 23:42:20 image-320000 dockerd[1101]: time="2023-09-06T23:42:20.714072384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 23:42:20 image-320000 dockerd[1101]: time="2023-09-06T23:42:20.714101842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 23:42:20 image-320000 dockerd[1101]: time="2023-09-06T23:42:20.714287634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 23:42:20 image-320000 dockerd[1101]: time="2023-09-06T23:42:20.714314967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 23:42:20 image-320000 dockerd[1101]: time="2023-09-06T23:42:20.853558800Z" level=info msg="shim disconnected" id=ead6248b5d42144e2726bb85dadbed7db657b0a949f5c53c7a91d76114a656c1 namespace=moby
	Sep 06 23:42:20 image-320000 dockerd[1095]: time="2023-09-06T23:42:20.853631967Z" level=info msg="ignoring event" container=ead6248b5d42144e2726bb85dadbed7db657b0a949f5c53c7a91d76114a656c1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 23:42:20 image-320000 dockerd[1101]: time="2023-09-06T23:42:20.853712467Z" level=warning msg="cleaning up after shim disconnected" id=ead6248b5d42144e2726bb85dadbed7db657b0a949f5c53c7a91d76114a656c1 namespace=moby
	Sep 06 23:42:20 image-320000 dockerd[1101]: time="2023-09-06T23:42:20.853721967Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	9c1ed5f65fe7b       9cdd6470f48c8       7 seconds ago       Running             etcd                      0                   ee8170b339771
	1c4dbf93484fe       b4a5a57e99492       7 seconds ago       Running             kube-scheduler            0                   b625a985a4f84
	dad41413b973e       8b6e1980b7584       7 seconds ago       Running             kube-controller-manager   0                   e2803fbe4dab3
	fffc7766e1231       b29fb62480892       7 seconds ago       Running             kube-apiserver            0                   28bb7f0ccdcb2
	
	* 
	* ==> describe nodes <==
	* Name:               image-320000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=image-320000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2
	                    minikube.k8s.io/name=image-320000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_06T16_42_19_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 06 Sep 2023 23:42:16 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  image-320000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 06 Sep 2023 23:42:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 06 Sep 2023 23:42:18 +0000   Wed, 06 Sep 2023 23:42:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 06 Sep 2023 23:42:18 +0000   Wed, 06 Sep 2023 23:42:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 06 Sep 2023 23:42:18 +0000   Wed, 06 Sep 2023 23:42:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 06 Sep 2023 23:42:18 +0000   Wed, 06 Sep 2023 23:42:15 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
	Addresses:
	  InternalIP:  192.168.105.5
	  Hostname:    image-320000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 95494638695b46c0835db7e4fab3c4dc
	  System UUID:                95494638695b46c0835db7e4fab3c4dc
	  Boot ID:                    cef2b87d-2e19-4033-a877-0b2c8a8d6373
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.5
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-image-320000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3s
	  kube-system                 kube-apiserver-image-320000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-controller-manager-image-320000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-image-320000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (2%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From     Message
	  ----    ------                   ----  ----     -------
	  Normal  Starting                 3s    kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  3s    kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3s    kubelet  Node image-320000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s    kubelet  Node image-320000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s    kubelet  Node image-320000 status is now: NodeHasSufficientPID
	
	* 
	* ==> dmesg <==
	* [Sep 6 23:42] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.664201] EINJ: EINJ table not found.
	[  +0.515484] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.044092] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000864] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.121956] systemd-fstab-generator[475]: Ignoring "noauto" for root device
	[  +0.078315] systemd-fstab-generator[486]: Ignoring "noauto" for root device
	[  +0.451352] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.172162] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[  +0.080304] systemd-fstab-generator[711]: Ignoring "noauto" for root device
	[  +0.083825] systemd-fstab-generator[724]: Ignoring "noauto" for root device
	[  +1.151438] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.100889] systemd-fstab-generator[911]: Ignoring "noauto" for root device
	[  +0.083119] systemd-fstab-generator[922]: Ignoring "noauto" for root device
	[  +0.086787] systemd-fstab-generator[933]: Ignoring "noauto" for root device
	[  +0.079664] systemd-fstab-generator[944]: Ignoring "noauto" for root device
	[  +0.090429] systemd-fstab-generator[986]: Ignoring "noauto" for root device
	[  +2.529325] systemd-fstab-generator[1088]: Ignoring "noauto" for root device
	[  +3.665425] systemd-fstab-generator[1420]: Ignoring "noauto" for root device
	[  +0.368257] kauditd_printk_skb: 68 callbacks suppressed
	[  +4.259861] systemd-fstab-generator[2275]: Ignoring "noauto" for root device
	[  +2.256686] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [9c1ed5f65fe7] <==
	* {"level":"info","ts":"2023-09-06T23:42:15.211122Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-09-06T23:42:15.211151Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-09-06T23:42:15.211174Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-06T23:42:15.211231Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-06T23:42:15.211252Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-06T23:42:15.211481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 switched to configuration voters=(6403572207504089856)"}
	{"level":"info","ts":"2023-09-06T23:42:15.211554Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","added-peer-id":"58de0efec1d86300","added-peer-peer-urls":["https://192.168.105.5:2380"]}
	{"level":"info","ts":"2023-09-06T23:42:15.482622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 is starting a new election at term 1"}
	{"level":"info","ts":"2023-09-06T23:42:15.482676Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-09-06T23:42:15.482708Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgPreVoteResp from 58de0efec1d86300 at term 1"}
	{"level":"info","ts":"2023-09-06T23:42:15.482737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became candidate at term 2"}
	{"level":"info","ts":"2023-09-06T23:42:15.482762Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgVoteResp from 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-09-06T23:42:15.482786Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became leader at term 2"}
	{"level":"info","ts":"2023-09-06T23:42:15.482819Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 58de0efec1d86300 elected leader 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-09-06T23:42:15.489242Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-06T23:42:15.490385Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"58de0efec1d86300","local-member-attributes":"{Name:image-320000 ClientURLs:[https://192.168.105.5:2379]}","request-path":"/0/members/58de0efec1d86300/attributes","cluster-id":"cd5c0afff2184bea","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-06T23:42:15.490433Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-06T23:42:15.490455Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-06T23:42:15.491117Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.5:2379"}
	{"level":"info","ts":"2023-09-06T23:42:15.491134Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-06T23:42:15.490558Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-06T23:42:15.504136Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-06T23:42:15.490547Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-06T23:42:15.50418Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-06T23:42:15.5042Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  23:42:21 up 0 min,  0 users,  load average: 0.75, 0.16, 0.05
	Linux image-320000 5.10.57 #1 SMP PREEMPT Thu Aug 24 12:01:08 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [fffc7766e123] <==
	* I0906 23:42:16.241127       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0906 23:42:16.241584       1 controller.go:624] quota admission added evaluator for: namespaces
	I0906 23:42:16.248944       1 shared_informer.go:318] Caches are synced for configmaps
	I0906 23:42:16.249018       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0906 23:42:16.249075       1 aggregator.go:166] initial CRD sync complete...
	I0906 23:42:16.249083       1 autoregister_controller.go:141] Starting autoregister controller
	I0906 23:42:16.249086       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0906 23:42:16.249089       1 cache.go:39] Caches are synced for autoregister controller
	I0906 23:42:16.249304       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0906 23:42:16.250626       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0906 23:42:16.262482       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0906 23:42:16.292069       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0906 23:42:17.143680       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0906 23:42:17.144837       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0906 23:42:17.144845       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0906 23:42:17.280439       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0906 23:42:17.297851       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0906 23:42:17.352358       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0906 23:42:17.355413       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.105.5]
	I0906 23:42:17.355816       1 controller.go:624] quota admission added evaluator for: endpoints
	I0906 23:42:17.359848       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0906 23:42:18.181797       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0906 23:42:18.548475       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0906 23:42:18.552558       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0906 23:42:18.560424       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	
	* 
	* ==> kube-controller-manager [dad41413b973] <==
	* I0906 23:42:19.831840       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0906 23:42:19.831871       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0906 23:42:19.831877       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0906 23:42:19.981652       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0906 23:42:19.981713       1 gc_controller.go:103] "Starting GC controller"
	I0906 23:42:19.981732       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0906 23:42:20.131501       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0906 23:42:20.131545       1 job_controller.go:226] "Starting job controller"
	I0906 23:42:20.131552       1 shared_informer.go:311] Waiting for caches to sync for job
	I0906 23:42:20.281034       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I0906 23:42:20.281080       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0906 23:42:20.281085       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0906 23:42:20.331146       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0906 23:42:20.331155       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0906 23:42:20.331172       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0906 23:42:20.331520       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0906 23:42:20.331524       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0906 23:42:20.331531       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0906 23:42:20.331898       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0906 23:42:20.331903       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0906 23:42:20.331910       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0906 23:42:20.332267       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0906 23:42:20.332295       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0906 23:42:20.332301       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0906 23:42:20.332309       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	
	* 
	* ==> kube-scheduler [1c4dbf93484f] <==
	* W0906 23:42:16.216708       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0906 23:42:16.217014       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0906 23:42:16.216720       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0906 23:42:16.217019       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0906 23:42:16.216730       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0906 23:42:16.217023       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0906 23:42:16.216739       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0906 23:42:16.217064       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0906 23:42:16.216749       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0906 23:42:16.217086       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0906 23:42:16.216760       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0906 23:42:16.217101       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0906 23:42:16.216776       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0906 23:42:16.217136       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0906 23:42:17.048738       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0906 23:42:17.048757       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0906 23:42:17.053220       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0906 23:42:17.053232       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0906 23:42:17.060389       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0906 23:42:17.060399       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0906 23:42:17.146373       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0906 23:42:17.146390       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0906 23:42:17.163142       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0906 23:42:17.163154       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0906 23:42:17.713092       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-09-06 23:42:02 UTC, ends at Wed 2023-09-06 23:42:21 UTC. --
	Sep 06 23:42:18 image-320000 kubelet[2281]: I0906 23:42:18.707824    2281 kubelet_node_status.go:108] "Node was previously registered" node="image-320000"
	Sep 06 23:42:18 image-320000 kubelet[2281]: I0906 23:42:18.707908    2281 kubelet_node_status.go:73] "Successfully registered node" node="image-320000"
	Sep 06 23:42:18 image-320000 kubelet[2281]: I0906 23:42:18.712640    2281 topology_manager.go:215] "Topology Admit Handler" podUID="ab3eb936fd12dbc62d30d9e6ebf99832" podNamespace="kube-system" podName="kube-controller-manager-image-320000"
	Sep 06 23:42:18 image-320000 kubelet[2281]: I0906 23:42:18.713028    2281 topology_manager.go:215] "Topology Admit Handler" podUID="f80034075a37ce4919b940a1e8a69a18" podNamespace="kube-system" podName="kube-scheduler-image-320000"
	Sep 06 23:42:18 image-320000 kubelet[2281]: I0906 23:42:18.713058    2281 topology_manager.go:215] "Topology Admit Handler" podUID="c6477c476c053ba159bd09e7aa2ba582" podNamespace="kube-system" podName="etcd-image-320000"
	Sep 06 23:42:18 image-320000 kubelet[2281]: I0906 23:42:18.713072    2281 topology_manager.go:215] "Topology Admit Handler" podUID="1ce5080a7105111dbdb297d49e561508" podNamespace="kube-system" podName="kube-apiserver-image-320000"
	Sep 06 23:42:18 image-320000 kubelet[2281]: E0906 23:42:18.719619    2281 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-image-320000\" already exists" pod="kube-system/kube-apiserver-image-320000"
	Sep 06 23:42:18 image-320000 kubelet[2281]: I0906 23:42:18.802634    2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ab3eb936fd12dbc62d30d9e6ebf99832-ca-certs\") pod \"kube-controller-manager-image-320000\" (UID: \"ab3eb936fd12dbc62d30d9e6ebf99832\") " pod="kube-system/kube-controller-manager-image-320000"
	Sep 06 23:42:18 image-320000 kubelet[2281]: I0906 23:42:18.802654    2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ab3eb936fd12dbc62d30d9e6ebf99832-flexvolume-dir\") pod \"kube-controller-manager-image-320000\" (UID: \"ab3eb936fd12dbc62d30d9e6ebf99832\") " pod="kube-system/kube-controller-manager-image-320000"
	Sep 06 23:42:18 image-320000 kubelet[2281]: I0906 23:42:18.802667    2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ab3eb936fd12dbc62d30d9e6ebf99832-k8s-certs\") pod \"kube-controller-manager-image-320000\" (UID: \"ab3eb936fd12dbc62d30d9e6ebf99832\") " pod="kube-system/kube-controller-manager-image-320000"
	Sep 06 23:42:18 image-320000 kubelet[2281]: I0906 23:42:18.802679    2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ab3eb936fd12dbc62d30d9e6ebf99832-usr-share-ca-certificates\") pod \"kube-controller-manager-image-320000\" (UID: \"ab3eb936fd12dbc62d30d9e6ebf99832\") " pod="kube-system/kube-controller-manager-image-320000"
	Sep 06 23:42:18 image-320000 kubelet[2281]: I0906 23:42:18.802711    2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/c6477c476c053ba159bd09e7aa2ba582-etcd-certs\") pod \"etcd-image-320000\" (UID: \"c6477c476c053ba159bd09e7aa2ba582\") " pod="kube-system/etcd-image-320000"
	Sep 06 23:42:18 image-320000 kubelet[2281]: I0906 23:42:18.802722    2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ce5080a7105111dbdb297d49e561508-usr-share-ca-certificates\") pod \"kube-apiserver-image-320000\" (UID: \"1ce5080a7105111dbdb297d49e561508\") " pod="kube-system/kube-apiserver-image-320000"
	Sep 06 23:42:18 image-320000 kubelet[2281]: I0906 23:42:18.802732    2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ab3eb936fd12dbc62d30d9e6ebf99832-kubeconfig\") pod \"kube-controller-manager-image-320000\" (UID: \"ab3eb936fd12dbc62d30d9e6ebf99832\") " pod="kube-system/kube-controller-manager-image-320000"
	Sep 06 23:42:18 image-320000 kubelet[2281]: I0906 23:42:18.802742    2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f80034075a37ce4919b940a1e8a69a18-kubeconfig\") pod \"kube-scheduler-image-320000\" (UID: \"f80034075a37ce4919b940a1e8a69a18\") " pod="kube-system/kube-scheduler-image-320000"
	Sep 06 23:42:18 image-320000 kubelet[2281]: I0906 23:42:18.802752    2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/c6477c476c053ba159bd09e7aa2ba582-etcd-data\") pod \"etcd-image-320000\" (UID: \"c6477c476c053ba159bd09e7aa2ba582\") " pod="kube-system/etcd-image-320000"
	Sep 06 23:42:18 image-320000 kubelet[2281]: I0906 23:42:18.802760    2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ce5080a7105111dbdb297d49e561508-ca-certs\") pod \"kube-apiserver-image-320000\" (UID: \"1ce5080a7105111dbdb297d49e561508\") " pod="kube-system/kube-apiserver-image-320000"
	Sep 06 23:42:18 image-320000 kubelet[2281]: I0906 23:42:18.802768    2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ce5080a7105111dbdb297d49e561508-k8s-certs\") pod \"kube-apiserver-image-320000\" (UID: \"1ce5080a7105111dbdb297d49e561508\") " pod="kube-system/kube-apiserver-image-320000"
	Sep 06 23:42:19 image-320000 kubelet[2281]: I0906 23:42:19.583547    2281 apiserver.go:52] "Watching apiserver"
	Sep 06 23:42:19 image-320000 kubelet[2281]: I0906 23:42:19.602065    2281 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Sep 06 23:42:19 image-320000 kubelet[2281]: E0906 23:42:19.670701    2281 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-image-320000\" already exists" pod="kube-system/kube-apiserver-image-320000"
	Sep 06 23:42:19 image-320000 kubelet[2281]: I0906 23:42:19.676901    2281 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-image-320000" podStartSLOduration=2.676862258 podCreationTimestamp="2023-09-06 23:42:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-06 23:42:19.672975716 +0000 UTC m=+1.136239043" watchObservedRunningTime="2023-09-06 23:42:19.676862258 +0000 UTC m=+1.140125627"
	Sep 06 23:42:19 image-320000 kubelet[2281]: I0906 23:42:19.688005    2281 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-image-320000" podStartSLOduration=1.687973508 podCreationTimestamp="2023-09-06 23:42:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-06 23:42:19.676997592 +0000 UTC m=+1.140260960" watchObservedRunningTime="2023-09-06 23:42:19.687973508 +0000 UTC m=+1.151236877"
	Sep 06 23:42:19 image-320000 kubelet[2281]: I0906 23:42:19.688032    2281 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-image-320000" podStartSLOduration=1.6880253 podCreationTimestamp="2023-09-06 23:42:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-06 23:42:19.68682005 +0000 UTC m=+1.150083419" watchObservedRunningTime="2023-09-06 23:42:19.6880253 +0000 UTC m=+1.151288669"
	Sep 06 23:42:19 image-320000 kubelet[2281]: I0906 23:42:19.694790    2281 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-image-320000" podStartSLOduration=1.694771758 podCreationTimestamp="2023-09-06 23:42:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-06 23:42:19.691017717 +0000 UTC m=+1.154281085" watchObservedRunningTime="2023-09-06 23:42:19.694771758 +0000 UTC m=+1.158035127"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p image-320000 -n image-320000
helpers_test.go:261: (dbg) Run:  kubectl --context image-320000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestImageBuild/serial/BuildWithBuildArg]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context image-320000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context image-320000 describe pod storage-provisioner: exit status 1 (40.128584ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context image-320000 describe pod storage-provisioner: exit status 1
--- FAIL: TestImageBuild/serial/BuildWithBuildArg (1.04s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (56.63s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-208000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-208000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (13.283930209s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-208000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-208000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [8919119a-428c-4976-b8f8-885c779a467a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [8919119a-428c-4976-b8f8-885c779a467a] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.016442667s
addons_test.go:238: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-208000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-208000 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-208000 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.105.6
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.105.6: exit status 1 (15.040857333s)

-- stdout --
	;; connection timed out; no servers could be reached
	

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.105.6" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-208000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-208000 addons disable ingress-dns --alsologtostderr -v=1: (9.877696583s)
addons_test.go:287: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-208000 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-208000 addons disable ingress --alsologtostderr -v=1: (7.125514083s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ingress-addon-legacy-208000 -n ingress-addon-legacy-208000
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-208000 logs -n 25
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                   Args                   |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| dashboard      | --url --port 36195                       | functional-526000           | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT | 06 Sep 23 16:41 PDT |
	|                | -p functional-526000                     |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	| update-context | functional-526000                        | functional-526000           | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT | 06 Sep 23 16:41 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-526000                        | functional-526000           | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT | 06 Sep 23 16:41 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-526000                        | functional-526000           | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT | 06 Sep 23 16:41 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| image          | functional-526000                        | functional-526000           | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT | 06 Sep 23 16:41 PDT |
	|                | image ls --format short                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-526000                        | functional-526000           | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT | 06 Sep 23 16:41 PDT |
	|                | image ls --format yaml                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| ssh            | functional-526000 ssh pgrep              | functional-526000           | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT |                     |
	|                | buildkitd                                |                             |         |         |                     |                     |
	| image          | functional-526000 image build -t         | functional-526000           | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT | 06 Sep 23 16:41 PDT |
	|                | localhost/my-image:functional-526000     |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr         |                             |         |         |                     |                     |
	| image          | functional-526000 image ls               | functional-526000           | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT | 06 Sep 23 16:41 PDT |
	| image          | functional-526000                        | functional-526000           | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT | 06 Sep 23 16:41 PDT |
	|                | image ls --format json                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-526000                        | functional-526000           | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT | 06 Sep 23 16:41 PDT |
	|                | image ls --format table                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| delete         | -p functional-526000                     | functional-526000           | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT | 06 Sep 23 16:41 PDT |
	| start          | -p image-320000 --driver=qemu2           | image-320000                | jenkins | v1.31.2 | 06 Sep 23 16:41 PDT | 06 Sep 23 16:42 PDT |
	|                |                                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-320000                | jenkins | v1.31.2 | 06 Sep 23 16:42 PDT | 06 Sep 23 16:42 PDT |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | -p image-320000                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-320000                | jenkins | v1.31.2 | 06 Sep 23 16:42 PDT | 06 Sep 23 16:42 PDT |
	|                | --build-opt=build-arg=ENV_A=test_env_str |                             |         |         |                     |                     |
	|                | --build-opt=no-cache                     |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p       |                             |         |         |                     |                     |
	|                | image-320000                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-320000                | jenkins | v1.31.2 | 06 Sep 23 16:42 PDT | 06 Sep 23 16:42 PDT |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | --build-opt=no-cache -p                  |                             |         |         |                     |                     |
	|                | image-320000                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-320000                | jenkins | v1.31.2 | 06 Sep 23 16:42 PDT | 06 Sep 23 16:42 PDT |
	|                | -f inner/Dockerfile                      |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-f            |                             |         |         |                     |                     |
	|                | -p image-320000                          |                             |         |         |                     |                     |
	| delete         | -p image-320000                          | image-320000                | jenkins | v1.31.2 | 06 Sep 23 16:42 PDT | 06 Sep 23 16:42 PDT |
	| start          | -p ingress-addon-legacy-208000           | ingress-addon-legacy-208000 | jenkins | v1.31.2 | 06 Sep 23 16:42 PDT | 06 Sep 23 16:43 PDT |
	|                | --kubernetes-version=v1.18.20            |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	|                | --driver=qemu2                           |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-208000              | ingress-addon-legacy-208000 | jenkins | v1.31.2 | 06 Sep 23 16:43 PDT | 06 Sep 23 16:43 PDT |
	|                | addons enable ingress                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-208000              | ingress-addon-legacy-208000 | jenkins | v1.31.2 | 06 Sep 23 16:43 PDT | 06 Sep 23 16:43 PDT |
	|                | addons enable ingress-dns                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-208000              | ingress-addon-legacy-208000 | jenkins | v1.31.2 | 06 Sep 23 16:44 PDT | 06 Sep 23 16:44 PDT |
	|                | ssh curl -s http://127.0.0.1/            |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'             |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-208000 ip           | ingress-addon-legacy-208000 | jenkins | v1.31.2 | 06 Sep 23 16:44 PDT | 06 Sep 23 16:44 PDT |
	| addons         | ingress-addon-legacy-208000              | ingress-addon-legacy-208000 | jenkins | v1.31.2 | 06 Sep 23 16:44 PDT | 06 Sep 23 16:44 PDT |
	|                | addons disable ingress-dns               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-208000              | ingress-addon-legacy-208000 | jenkins | v1.31.2 | 06 Sep 23 16:44 PDT | 06 Sep 23 16:44 PDT |
	|                | addons disable ingress                   |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/06 16:42:22
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 16:42:22.362064    2174 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:42:22.362192    2174 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:42:22.362195    2174 out.go:309] Setting ErrFile to fd 2...
	I0906 16:42:22.362198    2174 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:42:22.362312    2174 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:42:22.363302    2174 out.go:303] Setting JSON to false
	I0906 16:42:22.378520    2174 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":716,"bootTime":1694043026,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0906 16:42:22.378592    2174 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 16:42:22.381547    2174 out.go:177] * [ingress-addon-legacy-208000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 16:42:22.388497    2174 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 16:42:22.392247    2174 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:42:22.388536    2174 notify.go:220] Checking for updates...
	I0906 16:42:22.397471    2174 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 16:42:22.400472    2174 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 16:42:22.403444    2174 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	I0906 16:42:22.406472    2174 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 16:42:22.409672    2174 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 16:42:22.413403    2174 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 16:42:22.420478    2174 start.go:298] selected driver: qemu2
	I0906 16:42:22.420483    2174 start.go:902] validating driver "qemu2" against <nil>
	I0906 16:42:22.420493    2174 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 16:42:22.422421    2174 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 16:42:22.425441    2174 out.go:177] * Automatically selected the socket_vmnet network
	I0906 16:42:22.428501    2174 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 16:42:22.428526    2174 cni.go:84] Creating CNI manager for ""
	I0906 16:42:22.428533    2174 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0906 16:42:22.428537    2174 start_flags.go:321] config:
	{Name:ingress-addon-legacy-208000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-208000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP
: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:42:22.434123    2174 iso.go:125] acquiring lock: {Name:mk208580a41bd83b6238142fbb7ad0497ef8967b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:42:22.439501    2174 out.go:177] * Starting control plane node ingress-addon-legacy-208000 in cluster ingress-addon-legacy-208000
	I0906 16:42:22.443456    2174 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0906 16:42:22.505310    2174 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0906 16:42:22.505329    2174 cache.go:57] Caching tarball of preloaded images
	I0906 16:42:22.505515    2174 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0906 16:42:22.510507    2174 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0906 16:42:22.518476    2174 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0906 16:42:22.598641    2174 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0906 16:42:28.594423    2174 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0906 16:42:28.594558    2174 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0906 16:42:29.342725    2174 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0906 16:42:29.342910    2174 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/config.json ...
	I0906 16:42:29.342936    2174 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/config.json: {Name:mke9c192635a968853d1e8eaf81df7caf1236060 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:42:29.343186    2174 start.go:365] acquiring machines lock for ingress-addon-legacy-208000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:42:29.343213    2174 start.go:369] acquired machines lock for "ingress-addon-legacy-208000" in 21.292µs
	I0906 16:42:29.343223    2174 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-208000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 K
ubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-208000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:doc
ker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:42:29.343263    2174 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:42:29.348306    2174 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0906 16:42:29.362809    2174 start.go:159] libmachine.API.Create for "ingress-addon-legacy-208000" (driver="qemu2")
	I0906 16:42:29.362839    2174 client.go:168] LocalClient.Create starting
	I0906 16:42:29.362914    2174 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:42:29.362939    2174 main.go:141] libmachine: Decoding PEM data...
	I0906 16:42:29.362950    2174 main.go:141] libmachine: Parsing certificate...
	I0906 16:42:29.362990    2174 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:42:29.363008    2174 main.go:141] libmachine: Decoding PEM data...
	I0906 16:42:29.363013    2174 main.go:141] libmachine: Parsing certificate...
	I0906 16:42:29.363343    2174 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:42:29.481584    2174 main.go:141] libmachine: Creating SSH key...
	I0906 16:42:29.574868    2174 main.go:141] libmachine: Creating Disk image...
	I0906 16:42:29.574874    2174 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:42:29.575004    2174 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/ingress-addon-legacy-208000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/ingress-addon-legacy-208000/disk.qcow2
	I0906 16:42:29.583430    2174 main.go:141] libmachine: STDOUT: 
	I0906 16:42:29.583444    2174 main.go:141] libmachine: STDERR: 
	I0906 16:42:29.583501    2174 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/ingress-addon-legacy-208000/disk.qcow2 +20000M
	I0906 16:42:29.590672    2174 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:42:29.590684    2174 main.go:141] libmachine: STDERR: 
	I0906 16:42:29.590696    2174 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/ingress-addon-legacy-208000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/ingress-addon-legacy-208000/disk.qcow2
	I0906 16:42:29.590705    2174 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:42:29.590739    2174 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4096 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/ingress-addon-legacy-208000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/ingress-addon-legacy-208000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/ingress-addon-legacy-208000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:ca:fc:fe:d5:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/ingress-addon-legacy-208000/disk.qcow2
	I0906 16:42:29.624695    2174 main.go:141] libmachine: STDOUT: 
	I0906 16:42:29.624723    2174 main.go:141] libmachine: STDERR: 
	I0906 16:42:29.624727    2174 main.go:141] libmachine: Attempt 0
	I0906 16:42:29.624737    2174 main.go:141] libmachine: Searching for 2e:ca:fc:fe:d5:84 in /var/db/dhcpd_leases ...
	I0906 16:42:29.624814    2174 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0906 16:42:29.624836    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:3e:90:d3:f1:f8:6 ID:1,3e:90:d3:f1:f8:6 Lease:0x64fa5fca}
	I0906 16:42:29.624844    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:da:1f:56:93:80:a0 ID:1,da:1f:56:93:80:a0 Lease:0x64fa5f0d}
	I0906 16:42:29.624850    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:3e:f2:cf:7:dc:7c ID:1,3e:f2:cf:7:dc:7c Lease:0x64f90d80}
	I0906 16:42:29.624855    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:3e:de:51:c7:76:4e ID:1,3e:de:51:c7:76:4e Lease:0x64fa5ebc}
	I0906 16:42:31.626963    2174 main.go:141] libmachine: Attempt 1
	I0906 16:42:31.627085    2174 main.go:141] libmachine: Searching for 2e:ca:fc:fe:d5:84 in /var/db/dhcpd_leases ...
	I0906 16:42:31.627358    2174 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0906 16:42:31.627411    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:3e:90:d3:f1:f8:6 ID:1,3e:90:d3:f1:f8:6 Lease:0x64fa5fca}
	I0906 16:42:31.627459    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:da:1f:56:93:80:a0 ID:1,da:1f:56:93:80:a0 Lease:0x64fa5f0d}
	I0906 16:42:31.627492    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:3e:f2:cf:7:dc:7c ID:1,3e:f2:cf:7:dc:7c Lease:0x64f90d80}
	I0906 16:42:31.627522    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:3e:de:51:c7:76:4e ID:1,3e:de:51:c7:76:4e Lease:0x64fa5ebc}
	I0906 16:42:33.628876    2174 main.go:141] libmachine: Attempt 2
	I0906 16:42:33.628929    2174 main.go:141] libmachine: Searching for 2e:ca:fc:fe:d5:84 in /var/db/dhcpd_leases ...
	I0906 16:42:33.629042    2174 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0906 16:42:33.629054    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:3e:90:d3:f1:f8:6 ID:1,3e:90:d3:f1:f8:6 Lease:0x64fa5fca}
	I0906 16:42:33.629061    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:da:1f:56:93:80:a0 ID:1,da:1f:56:93:80:a0 Lease:0x64fa5f0d}
	I0906 16:42:33.629066    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:3e:f2:cf:7:dc:7c ID:1,3e:f2:cf:7:dc:7c Lease:0x64f90d80}
	I0906 16:42:33.629071    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:3e:de:51:c7:76:4e ID:1,3e:de:51:c7:76:4e Lease:0x64fa5ebc}
	I0906 16:42:35.631082    2174 main.go:141] libmachine: Attempt 3
	I0906 16:42:35.631102    2174 main.go:141] libmachine: Searching for 2e:ca:fc:fe:d5:84 in /var/db/dhcpd_leases ...
	I0906 16:42:35.631186    2174 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0906 16:42:35.631195    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:3e:90:d3:f1:f8:6 ID:1,3e:90:d3:f1:f8:6 Lease:0x64fa5fca}
	I0906 16:42:35.631200    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:da:1f:56:93:80:a0 ID:1,da:1f:56:93:80:a0 Lease:0x64fa5f0d}
	I0906 16:42:35.631224    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:3e:f2:cf:7:dc:7c ID:1,3e:f2:cf:7:dc:7c Lease:0x64f90d80}
	I0906 16:42:35.631229    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:3e:de:51:c7:76:4e ID:1,3e:de:51:c7:76:4e Lease:0x64fa5ebc}
	I0906 16:42:37.633216    2174 main.go:141] libmachine: Attempt 4
	I0906 16:42:37.633226    2174 main.go:141] libmachine: Searching for 2e:ca:fc:fe:d5:84 in /var/db/dhcpd_leases ...
	I0906 16:42:37.633257    2174 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0906 16:42:37.633265    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:3e:90:d3:f1:f8:6 ID:1,3e:90:d3:f1:f8:6 Lease:0x64fa5fca}
	I0906 16:42:37.633271    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:da:1f:56:93:80:a0 ID:1,da:1f:56:93:80:a0 Lease:0x64fa5f0d}
	I0906 16:42:37.633276    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:3e:f2:cf:7:dc:7c ID:1,3e:f2:cf:7:dc:7c Lease:0x64f90d80}
	I0906 16:42:37.633283    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:3e:de:51:c7:76:4e ID:1,3e:de:51:c7:76:4e Lease:0x64fa5ebc}
	I0906 16:42:39.635319    2174 main.go:141] libmachine: Attempt 5
	I0906 16:42:39.635336    2174 main.go:141] libmachine: Searching for 2e:ca:fc:fe:d5:84 in /var/db/dhcpd_leases ...
	I0906 16:42:39.635412    2174 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0906 16:42:39.635422    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:3e:90:d3:f1:f8:6 ID:1,3e:90:d3:f1:f8:6 Lease:0x64fa5fca}
	I0906 16:42:39.635429    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:da:1f:56:93:80:a0 ID:1,da:1f:56:93:80:a0 Lease:0x64fa5f0d}
	I0906 16:42:39.635435    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:3e:f2:cf:7:dc:7c ID:1,3e:f2:cf:7:dc:7c Lease:0x64f90d80}
	I0906 16:42:39.635442    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:3e:de:51:c7:76:4e ID:1,3e:de:51:c7:76:4e Lease:0x64fa5ebc}
	I0906 16:42:41.637499    2174 main.go:141] libmachine: Attempt 6
	I0906 16:42:41.637553    2174 main.go:141] libmachine: Searching for 2e:ca:fc:fe:d5:84 in /var/db/dhcpd_leases ...
	I0906 16:42:41.637716    2174 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0906 16:42:41.637737    2174 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:2e:ca:fc:fe:d5:84 ID:1,2e:ca:fc:fe:d5:84 Lease:0x64fa5ff0}
	I0906 16:42:41.637750    2174 main.go:141] libmachine: Found match: 2e:ca:fc:fe:d5:84
	I0906 16:42:41.637764    2174 main.go:141] libmachine: IP: 192.168.105.6
	I0906 16:42:41.637772    2174 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
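	For readers tracing the log: the attempts above reduce to scanning macOS's `/var/db/dhcpd_leases` for the VM's MAC address until a lease appears, then extracting the matching IP. A minimal stand-alone sketch of that lookup (the temp file and sample entries are illustrative stand-ins for the real lease file, using the MAC from this run):

```shell
#!/bin/sh
# Build a small dhcpd_leases-style sample and extract the IP for a known MAC.
# On a real macOS host the file to scan is /var/db/dhcpd_leases.
LEASES=$(mktemp)
cat > "$LEASES" <<'EOF'
{Name:minikube IPAddress:192.168.105.6 HWAddress:2e:ca:fc:fe:d5:84 ID:1,2e:ca:fc:fe:d5:84 Lease:0x64fa5ff0}
{Name:minikube IPAddress:192.168.105.5 HWAddress:3e:90:d3:f1:f8:6 ID:1,3e:90:d3:f1:f8:6 Lease:0x64fa5fca}
EOF
MAC="2e:ca:fc:fe:d5:84"
# Match the lease line for our MAC, then pull out its IPAddress field.
IP=$(grep "HWAddress:$MAC" "$LEASES" | sed -n 's/.*IPAddress:\([0-9.]*\).*/\1/p')
echo "$IP"
```

In the log, this scan is simply retried every two seconds until the freshly booted VM requests a lease (attempt 6 above).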
	I0906 16:42:43.658758    2174 machine.go:88] provisioning docker machine ...
	I0906 16:42:43.658818    2174 buildroot.go:166] provisioning hostname "ingress-addon-legacy-208000"
	I0906 16:42:43.659012    2174 main.go:141] libmachine: Using SSH client type: native
	I0906 16:42:43.659848    2174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028e63b0] 0x1028e8e10 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0906 16:42:43.659874    2174 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-208000 && echo "ingress-addon-legacy-208000" | sudo tee /etc/hostname
	I0906 16:42:43.767194    2174 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-208000
	
	I0906 16:42:43.767321    2174 main.go:141] libmachine: Using SSH client type: native
	I0906 16:42:43.767829    2174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028e63b0] 0x1028e8e10 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0906 16:42:43.767854    2174 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-208000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-208000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-208000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 16:42:43.850916    2174 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 16:42:43.850935    2174 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17174-979/.minikube CaCertPath:/Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17174-979/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17174-979/.minikube}
	I0906 16:42:43.850952    2174 buildroot.go:174] setting up certificates
	I0906 16:42:43.850960    2174 provision.go:83] configureAuth start
	I0906 16:42:43.850965    2174 provision.go:138] copyHostCerts
	I0906 16:42:43.851010    2174 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17174-979/.minikube/ca.pem
	I0906 16:42:43.851082    2174 exec_runner.go:144] found /Users/jenkins/minikube-integration/17174-979/.minikube/ca.pem, removing ...
	I0906 16:42:43.851091    2174 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17174-979/.minikube/ca.pem
	I0906 16:42:43.851326    2174 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17174-979/.minikube/ca.pem (1082 bytes)
	I0906 16:42:43.851580    2174 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17174-979/.minikube/cert.pem
	I0906 16:42:43.851628    2174 exec_runner.go:144] found /Users/jenkins/minikube-integration/17174-979/.minikube/cert.pem, removing ...
	I0906 16:42:43.851635    2174 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17174-979/.minikube/cert.pem
	I0906 16:42:43.851711    2174 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17174-979/.minikube/cert.pem (1123 bytes)
	I0906 16:42:43.851959    2174 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17174-979/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17174-979/.minikube/key.pem
	I0906 16:42:43.852007    2174 exec_runner.go:144] found /Users/jenkins/minikube-integration/17174-979/.minikube/key.pem, removing ...
	I0906 16:42:43.852013    2174 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17174-979/.minikube/key.pem
	I0906 16:42:43.852113    2174 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17174-979/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17174-979/.minikube/key.pem (1679 bytes)
	I0906 16:42:43.852274    2174 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17174-979/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-208000 san=[192.168.105.6 192.168.105.6 localhost 127.0.0.1 minikube ingress-addon-legacy-208000]
	I0906 16:42:43.910361    2174 provision.go:172] copyRemoteCerts
	I0906 16:42:43.910420    2174 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 16:42:43.910430    2174 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/ingress-addon-legacy-208000/id_rsa Username:docker}
	I0906 16:42:43.947267    2174 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 16:42:43.947322    2174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0906 16:42:43.954892    2174 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17174-979/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 16:42:43.954932    2174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0906 16:42:43.962440    2174 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17174-979/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 16:42:43.962489    2174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 16:42:43.969498    2174 provision.go:86] duration metric: configureAuth took 118.532708ms
	I0906 16:42:43.969505    2174 buildroot.go:189] setting minikube options for container-runtime
	I0906 16:42:43.969610    2174 config.go:182] Loaded profile config "ingress-addon-legacy-208000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0906 16:42:43.969643    2174 main.go:141] libmachine: Using SSH client type: native
	I0906 16:42:43.969856    2174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028e63b0] 0x1028e8e10 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0906 16:42:43.969864    2174 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 16:42:44.038508    2174 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 16:42:44.038518    2174 buildroot.go:70] root file system type: tmpfs
	I0906 16:42:44.038588    2174 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 16:42:44.038642    2174 main.go:141] libmachine: Using SSH client type: native
	I0906 16:42:44.038898    2174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028e63b0] 0x1028e8e10 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0906 16:42:44.038935    2174 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 16:42:44.114466    2174 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 16:42:44.114515    2174 main.go:141] libmachine: Using SSH client type: native
	I0906 16:42:44.114799    2174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028e63b0] 0x1028e8e10 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0906 16:42:44.114811    2174 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 16:42:44.450116    2174 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 16:42:44.450130    2174 machine.go:91] provisioned docker machine in 791.361ms
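	The `diff ... || { mv ...; systemctl ...; }` command a few lines up is an idempotent-install idiom: write the candidate unit to a `.new` path, and only swap it into place (and reload/restart) when it differs from what is already installed. A small sketch of just the swap logic, with temporary paths standing in for `/lib/systemd/system` (no systemd involved here):

```shell
#!/bin/sh
# Write a candidate config to ".new" and install it only when it differs
# from the current file. On the first run the current file is absent, so
# diff fails and the swap happens -- matching the "can't stat" output above.
DIR=$(mktemp -d)
CUR="$DIR/docker.service"
NEW="$DIR/docker.service.new"
printf 'ExecStart=/usr/bin/dockerd\n' > "$NEW"
diff -u "$CUR" "$NEW" 2>/dev/null || mv "$NEW" "$CUR"
cat "$CUR"
```

When the files are identical, `diff` exits 0, the `||` branch is skipped, and no restart is triggered.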
	I0906 16:42:44.450135    2174 client.go:171] LocalClient.Create took 15.087599417s
	I0906 16:42:44.450156    2174 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-208000" took 15.087653459s
	I0906 16:42:44.450162    2174 start.go:300] post-start starting for "ingress-addon-legacy-208000" (driver="qemu2")
	I0906 16:42:44.450168    2174 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 16:42:44.450270    2174 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 16:42:44.450280    2174 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/ingress-addon-legacy-208000/id_rsa Username:docker}
	I0906 16:42:44.487072    2174 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 16:42:44.488526    2174 info.go:137] Remote host: Buildroot 2021.02.12
	I0906 16:42:44.488534    2174 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17174-979/.minikube/addons for local assets ...
	I0906 16:42:44.488600    2174 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17174-979/.minikube/files for local assets ...
	I0906 16:42:44.488698    2174 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17174-979/.minikube/files/etc/ssl/certs/13972.pem -> 13972.pem in /etc/ssl/certs
	I0906 16:42:44.488702    2174 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17174-979/.minikube/files/etc/ssl/certs/13972.pem -> /etc/ssl/certs/13972.pem
	I0906 16:42:44.488811    2174 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 16:42:44.491498    2174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/files/etc/ssl/certs/13972.pem --> /etc/ssl/certs/13972.pem (1708 bytes)
	I0906 16:42:44.498780    2174 start.go:303] post-start completed in 48.61425ms
	I0906 16:42:44.499158    2174 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/config.json ...
	I0906 16:42:44.499331    2174 start.go:128] duration metric: createHost completed in 15.156375459s
	I0906 16:42:44.499358    2174 main.go:141] libmachine: Using SSH client type: native
	I0906 16:42:44.499574    2174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028e63b0] 0x1028e8e10 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0906 16:42:44.499579    2174 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0906 16:42:44.568252    2174 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694043764.518355460
	
	I0906 16:42:44.568263    2174 fix.go:206] guest clock: 1694043764.518355460
	I0906 16:42:44.568267    2174 fix.go:219] Guest: 2023-09-06 16:42:44.51835546 -0700 PDT Remote: 2023-09-06 16:42:44.499336 -0700 PDT m=+22.157346459 (delta=19.01946ms)
	I0906 16:42:44.568280    2174 fix.go:190] guest clock delta is within tolerance: 19.01946ms
	I0906 16:42:44.568283    2174 start.go:83] releasing machines lock for "ingress-addon-legacy-208000", held for 15.225377167s
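	The guest-clock check above compares the VM's `date +%s.%N` against the host time and accepts the host clock if the delta is small (19ms here). A sketch of that tolerance comparison, using rounded timestamps from this run as illustrative values:

```shell
#!/bin/sh
# Compare two epoch timestamps and report whether their absolute
# difference is under a one-second tolerance (values are illustrative).
GUEST=1694043764.518
HOST=1694043764.499
DELTA=$(awk -v g="$GUEST" -v h="$HOST" 'BEGIN { d = g - h; if (d < 0) d = -d; print d }')
WITHIN=$(awk -v d="$DELTA" 'BEGIN { print (d < 1) ? "yes" : "no" }')
echo "$WITHIN"
```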
	I0906 16:42:44.568674    2174 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 16:42:44.568699    2174 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/ingress-addon-legacy-208000/id_rsa Username:docker}
	I0906 16:42:44.568675    2174 ssh_runner.go:195] Run: cat /version.json
	I0906 16:42:44.568724    2174 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/ingress-addon-legacy-208000/id_rsa Username:docker}
	I0906 16:42:44.648931    2174 ssh_runner.go:195] Run: systemctl --version
	I0906 16:42:44.650990    2174 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 16:42:44.653009    2174 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 16:42:44.653045    2174 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0906 16:42:44.656325    2174 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0906 16:42:44.661830    2174 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 16:42:44.661841    2174 start.go:466] detecting cgroup driver to use...
	I0906 16:42:44.661913    2174 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 16:42:44.669180    2174 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0906 16:42:44.672225    2174 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 16:42:44.675263    2174 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 16:42:44.675286    2174 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 16:42:44.678215    2174 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 16:42:44.681308    2174 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 16:42:44.684175    2174 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 16:42:44.687441    2174 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 16:42:44.690962    2174 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 16:42:44.694423    2174 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 16:42:44.697489    2174 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 16:42:44.700154    2174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 16:42:44.775388    2174 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 16:42:44.781690    2174 start.go:466] detecting cgroup driver to use...
	I0906 16:42:44.781740    2174 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 16:42:44.786389    2174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 16:42:44.792175    2174 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 16:42:44.798392    2174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 16:42:44.802988    2174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 16:42:44.807718    2174 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 16:42:44.843025    2174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 16:42:44.848322    2174 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 16:42:44.853571    2174 ssh_runner.go:195] Run: which cri-dockerd
	I0906 16:42:44.854999    2174 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 16:42:44.858158    2174 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0906 16:42:44.863249    2174 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 16:42:44.938806    2174 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 16:42:45.023541    2174 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 16:42:45.023558    2174 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0906 16:42:45.029076    2174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 16:42:45.105132    2174 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 16:42:46.263674    2174 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.15855075s)
	I0906 16:42:46.263739    2174 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 16:42:46.273416    2174 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 16:42:46.289117    2174 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.5 ...
	I0906 16:42:46.289255    2174 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0906 16:42:46.290587    2174 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 16:42:46.294627    2174 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0906 16:42:46.294671    2174 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 16:42:46.303758    2174 docker.go:636] Got preloaded images: 
	I0906 16:42:46.303769    2174 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0906 16:42:46.303809    2174 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0906 16:42:46.306772    2174 ssh_runner.go:195] Run: which lz4
	I0906 16:42:46.307856    2174 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0906 16:42:46.307945    2174 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 16:42:46.309104    2174 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 16:42:46.309120    2174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
	I0906 16:42:47.994862    2174 docker.go:600] Took 1.686989 seconds to copy over tarball
	I0906 16:42:47.994921    2174 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 16:42:49.321293    2174 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.326381542s)
	I0906 16:42:49.321307    2174 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 16:42:49.348596    2174 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0906 16:42:49.352454    2174 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0906 16:42:49.358205    2174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 16:42:49.435606    2174 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 16:42:50.914740    2174 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.479146667s)
	I0906 16:42:50.914823    2174 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 16:42:50.921105    2174 docker.go:636] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0906 16:42:50.921112    2174 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0906 16:42:50.921116    2174 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0906 16:42:50.931499    2174 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0906 16:42:50.931771    2174 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0906 16:42:50.931854    2174 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 16:42:50.931987    2174 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0906 16:42:50.932016    2174 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0906 16:42:50.932030    2174 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0906 16:42:50.932111    2174 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0906 16:42:50.932683    2174 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0906 16:42:50.941684    2174 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0906 16:42:50.943995    2174 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 16:42:50.944044    2174 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0906 16:42:50.944048    2174 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0906 16:42:50.944050    2174 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0906 16:42:50.944070    2174 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0906 16:42:50.944071    2174 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0906 16:42:50.944079    2174 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	W0906 16:42:51.472142    2174 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0906 16:42:51.472320    2174 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0906 16:42:51.479343    2174 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0906 16:42:51.479412    2174 docker.go:316] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0906 16:42:51.479453    2174 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0906 16:42:51.485775    2174 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	W0906 16:42:51.744624    2174 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0906 16:42:51.744736    2174 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0906 16:42:51.754855    2174 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0906 16:42:51.754885    2174 docker.go:316] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0906 16:42:51.754929    2174 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0906 16:42:51.761099    2174 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	W0906 16:42:51.979306    2174 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0906 16:42:51.979450    2174 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0906 16:42:51.985973    2174 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0906 16:42:51.985996    2174 docker.go:316] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0906 16:42:51.986037    2174 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0906 16:42:51.992114    2174 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	W0906 16:42:52.006265    2174 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0906 16:42:52.006376    2174 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 16:42:52.016997    2174 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0906 16:42:52.017024    2174 docker.go:316] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 16:42:52.017072    2174 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 16:42:52.027937    2174 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	W0906 16:42:52.215764    2174 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0906 16:42:52.215868    2174 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0906 16:42:52.222135    2174 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0906 16:42:52.222161    2174 docker.go:316] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0906 16:42:52.222203    2174 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0906 16:42:52.227875    2174 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	W0906 16:42:52.502813    2174 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0906 16:42:52.502930    2174 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0906 16:42:52.509107    2174 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0906 16:42:52.509136    2174 docker.go:316] Removing image: registry.k8s.io/coredns:1.6.7
	I0906 16:42:52.509176    2174 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0906 16:42:52.515480    2174 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0906 16:42:52.633458    2174 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0906 16:42:52.641120    2174 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0906 16:42:52.641140    2174 docker.go:316] Removing image: registry.k8s.io/pause:3.2
	I0906 16:42:52.641176    2174 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0906 16:42:52.646187    2174 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	W0906 16:42:52.812874    2174 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0906 16:42:52.813185    2174 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0906 16:42:52.829541    2174 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0906 16:42:52.829613    2174 docker.go:316] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0906 16:42:52.829718    2174 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0906 16:42:52.840726    2174 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0906 16:42:52.840795    2174 cache_images.go:92] LoadImages completed in 1.919711708s
	W0906 16:42:52.840880    2174 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20: no such file or directory
	I0906 16:42:52.840961    2174 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 16:42:52.855188    2174 cni.go:84] Creating CNI manager for ""
	I0906 16:42:52.855209    2174 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0906 16:42:52.855228    2174 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0906 16:42:52.855243    2174 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.6 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-208000 NodeName:ingress-addon-legacy-208000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0906 16:42:52.855383    2174 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-208000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 16:42:52.855486    2174 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-208000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-208000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0906 16:42:52.855567    2174 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0906 16:42:52.860500    2174 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 16:42:52.860553    2174 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 16:42:52.864310    2174 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (355 bytes)
	I0906 16:42:52.871253    2174 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0906 16:42:52.877516    2174 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2127 bytes)
	I0906 16:42:52.883498    2174 ssh_runner.go:195] Run: grep 192.168.105.6	control-plane.minikube.internal$ /etc/hosts
	I0906 16:42:52.884992    2174 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 16:42:52.888508    2174 certs.go:56] Setting up /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000 for IP: 192.168.105.6
	I0906 16:42:52.888521    2174 certs.go:190] acquiring lock for shared ca certs: {Name:mk43c724e281040fff2ff442572568aeff9573b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:42:52.888664    2174 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17174-979/.minikube/ca.key
	I0906 16:42:52.888702    2174 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17174-979/.minikube/proxy-client-ca.key
	I0906 16:42:52.888731    2174 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/client.key
	I0906 16:42:52.888747    2174 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/client.crt with IP's: []
	I0906 16:42:52.991428    2174 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/client.crt ...
	I0906 16:42:52.991432    2174 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/client.crt: {Name:mk33df828f9bd454216708b7f4b148f0c32d1497 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:42:52.991660    2174 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/client.key ...
	I0906 16:42:52.991664    2174 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/client.key: {Name:mkd39425dd5e02b99861e2e72ca37eca041fd08d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:42:52.991778    2174 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/apiserver.key.b354f644
	I0906 16:42:52.991786    2174 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/apiserver.crt.b354f644 with IP's: [192.168.105.6 10.96.0.1 127.0.0.1 10.0.0.1]
	I0906 16:42:53.097818    2174 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/apiserver.crt.b354f644 ...
	I0906 16:42:53.097821    2174 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/apiserver.crt.b354f644: {Name:mk0f61a260f48d6db617af2e3383c773cc03ce08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:42:53.097968    2174 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/apiserver.key.b354f644 ...
	I0906 16:42:53.097971    2174 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/apiserver.key.b354f644: {Name:mk9c7f36c89119abf383db32acb1f2eefafb9eed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:42:53.098073    2174 certs.go:337] copying /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/apiserver.crt.b354f644 -> /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/apiserver.crt
	I0906 16:42:53.098316    2174 certs.go:341] copying /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/apiserver.key.b354f644 -> /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/apiserver.key
	I0906 16:42:53.098453    2174 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/proxy-client.key
	I0906 16:42:53.098463    2174 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/proxy-client.crt with IP's: []
	I0906 16:42:53.198981    2174 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/proxy-client.crt ...
	I0906 16:42:53.198987    2174 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/proxy-client.crt: {Name:mked738d20aa55842b8e2fd9c9cef9c3a7cc1668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:42:53.199153    2174 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/proxy-client.key ...
	I0906 16:42:53.199156    2174 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/proxy-client.key: {Name:mk3555e17b78b8591a55a16dd827cc53a0b0478c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:42:53.199292    2174 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 16:42:53.199311    2174 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 16:42:53.199326    2174 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 16:42:53.199338    2174 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 16:42:53.199350    2174 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17174-979/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 16:42:53.199370    2174 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17174-979/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 16:42:53.199379    2174 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17174-979/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 16:42:53.199392    2174 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17174-979/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 16:42:53.199484    2174 certs.go:437] found cert: /Users/jenkins/minikube-integration/17174-979/.minikube/certs/Users/jenkins/minikube-integration/17174-979/.minikube/certs/1397.pem (1338 bytes)
	W0906 16:42:53.199519    2174 certs.go:433] ignoring /Users/jenkins/minikube-integration/17174-979/.minikube/certs/Users/jenkins/minikube-integration/17174-979/.minikube/certs/1397_empty.pem, impossibly tiny 0 bytes
	I0906 16:42:53.199528    2174 certs.go:437] found cert: /Users/jenkins/minikube-integration/17174-979/.minikube/certs/Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca-key.pem (1675 bytes)
	I0906 16:42:53.199554    2174 certs.go:437] found cert: /Users/jenkins/minikube-integration/17174-979/.minikube/certs/Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem (1082 bytes)
	I0906 16:42:53.199579    2174 certs.go:437] found cert: /Users/jenkins/minikube-integration/17174-979/.minikube/certs/Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem (1123 bytes)
	I0906 16:42:53.199599    2174 certs.go:437] found cert: /Users/jenkins/minikube-integration/17174-979/.minikube/certs/Users/jenkins/minikube-integration/17174-979/.minikube/certs/key.pem (1679 bytes)
	I0906 16:42:53.199647    2174 certs.go:437] found cert: /Users/jenkins/minikube-integration/17174-979/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17174-979/.minikube/files/etc/ssl/certs/13972.pem (1708 bytes)
	I0906 16:42:53.199676    2174 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17174-979/.minikube/certs/1397.pem -> /usr/share/ca-certificates/1397.pem
	I0906 16:42:53.199688    2174 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17174-979/.minikube/files/etc/ssl/certs/13972.pem -> /usr/share/ca-certificates/13972.pem
	I0906 16:42:53.199700    2174 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17174-979/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 16:42:53.200111    2174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0906 16:42:53.207691    2174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 16:42:53.214983    2174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 16:42:53.222367    2174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 16:42:53.229834    2174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 16:42:53.236995    2174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 16:42:53.243890    2174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 16:42:53.251269    2174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 16:42:53.258488    2174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/certs/1397.pem --> /usr/share/ca-certificates/1397.pem (1338 bytes)
	I0906 16:42:53.265626    2174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/files/etc/ssl/certs/13972.pem --> /usr/share/ca-certificates/13972.pem (1708 bytes)
	I0906 16:42:53.272284    2174 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17174-979/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 16:42:53.279242    2174 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 16:42:53.284388    2174 ssh_runner.go:195] Run: openssl version
	I0906 16:42:53.286503    2174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1397.pem && ln -fs /usr/share/ca-certificates/1397.pem /etc/ssl/certs/1397.pem"
	I0906 16:42:53.289764    2174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1397.pem
	I0906 16:42:53.291320    2174 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep  6 23:38 /usr/share/ca-certificates/1397.pem
	I0906 16:42:53.291351    2174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1397.pem
	I0906 16:42:53.293242    2174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1397.pem /etc/ssl/certs/51391683.0"
	I0906 16:42:53.296025    2174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13972.pem && ln -fs /usr/share/ca-certificates/13972.pem /etc/ssl/certs/13972.pem"
	I0906 16:42:53.299369    2174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13972.pem
	I0906 16:42:53.300933    2174 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep  6 23:38 /usr/share/ca-certificates/13972.pem
	I0906 16:42:53.300955    2174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13972.pem
	I0906 16:42:53.302700    2174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13972.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 16:42:53.305776    2174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 16:42:53.308593    2174 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 16:42:53.310034    2174 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep  6 23:37 /usr/share/ca-certificates/minikubeCA.pem
	I0906 16:42:53.310056    2174 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 16:42:53.311899    2174 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 16:42:53.315304    2174 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0906 16:42:53.316852    2174 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0906 16:42:53.316880    2174 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-208000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-208000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:42:53.316949    2174 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 16:42:53.322557    2174 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 16:42:53.325740    2174 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 16:42:53.328310    2174 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 16:42:53.331172    2174 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 16:42:53.331183    2174 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0906 16:42:53.358532    2174 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0906 16:42:53.358559    2174 kubeadm.go:322] [preflight] Running pre-flight checks
	I0906 16:42:53.440481    2174 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 16:42:53.440586    2174 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 16:42:53.440648    2174 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 16:42:53.486748    2174 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 16:42:53.487286    2174 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 16:42:53.487309    2174 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0906 16:42:53.569020    2174 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 16:42:53.577188    2174 out.go:204]   - Generating certificates and keys ...
	I0906 16:42:53.577222    2174 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0906 16:42:53.577255    2174 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0906 16:42:53.597474    2174 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0906 16:42:53.680845    2174 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0906 16:42:53.896380    2174 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0906 16:42:54.008054    2174 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0906 16:42:54.237245    2174 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0906 16:42:54.237314    2174 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-208000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0906 16:42:54.416113    2174 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0906 16:42:54.416187    2174 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-208000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0906 16:42:54.455667    2174 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0906 16:42:54.716543    2174 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0906 16:42:54.757193    2174 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0906 16:42:54.757296    2174 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 16:42:54.974745    2174 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 16:42:55.015676    2174 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 16:42:55.436457    2174 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 16:42:55.650986    2174 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 16:42:55.651254    2174 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 16:42:55.659406    2174 out.go:204]   - Booting up control plane ...
	I0906 16:42:55.659463    2174 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 16:42:55.659574    2174 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 16:42:55.659616    2174 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 16:42:55.659697    2174 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 16:42:55.659762    2174 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 16:43:06.664957    2174 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.006750 seconds
	I0906 16:43:06.665562    2174 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 16:43:06.698392    2174 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 16:43:07.230594    2174 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 16:43:07.230832    2174 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-208000 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0906 16:43:07.741047    2174 kubeadm.go:322] [bootstrap-token] Using token: 42ufdx.zr6w6qpfpqrp2sit
	I0906 16:43:07.744097    2174 out.go:204]   - Configuring RBAC rules ...
	I0906 16:43:07.744227    2174 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 16:43:07.746026    2174 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 16:43:07.754956    2174 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 16:43:07.756563    2174 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 16:43:07.758458    2174 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 16:43:07.760933    2174 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 16:43:07.767052    2174 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 16:43:07.961208    2174 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0906 16:43:08.149841    2174 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0906 16:43:08.150699    2174 kubeadm.go:322] 
	I0906 16:43:08.150745    2174 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0906 16:43:08.150759    2174 kubeadm.go:322] 
	I0906 16:43:08.150849    2174 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0906 16:43:08.150864    2174 kubeadm.go:322] 
	I0906 16:43:08.150886    2174 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0906 16:43:08.150943    2174 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 16:43:08.150978    2174 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 16:43:08.150988    2174 kubeadm.go:322] 
	I0906 16:43:08.151050    2174 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0906 16:43:08.151120    2174 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 16:43:08.151192    2174 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 16:43:08.151196    2174 kubeadm.go:322] 
	I0906 16:43:08.151260    2174 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 16:43:08.151330    2174 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0906 16:43:08.151335    2174 kubeadm.go:322] 
	I0906 16:43:08.151401    2174 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 42ufdx.zr6w6qpfpqrp2sit \
	I0906 16:43:08.151476    2174 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:5095446c3b17214aaa1a807af40fe852c4809cf7574bda1580a6e046d3ea63e1 \
	I0906 16:43:08.151505    2174 kubeadm.go:322]     --control-plane 
	I0906 16:43:08.151509    2174 kubeadm.go:322] 
	I0906 16:43:08.151571    2174 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0906 16:43:08.151578    2174 kubeadm.go:322] 
	I0906 16:43:08.151635    2174 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 42ufdx.zr6w6qpfpqrp2sit \
	I0906 16:43:08.151709    2174 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:5095446c3b17214aaa1a807af40fe852c4809cf7574bda1580a6e046d3ea63e1 
	I0906 16:43:08.151854    2174 kubeadm.go:322] W0906 23:42:53.307932    1407 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0906 16:43:08.151985    2174 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0906 16:43:08.152069    2174 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.5. Latest validated version: 19.03
	I0906 16:43:08.152142    2174 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 16:43:08.152233    2174 kubeadm.go:322] W0906 23:42:55.605283    1407 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0906 16:43:08.152328    2174 kubeadm.go:322] W0906 23:42:55.605983    1407 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0906 16:43:08.152340    2174 cni.go:84] Creating CNI manager for ""
	I0906 16:43:08.152348    2174 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0906 16:43:08.152366    2174 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 16:43:08.152454    2174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:43:08.152455    2174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2 minikube.k8s.io/name=ingress-addon-legacy-208000 minikube.k8s.io/updated_at=2023_09_06T16_43_08_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:43:08.156658    2174 ops.go:34] apiserver oom_adj: -16
	I0906 16:43:08.251992    2174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:43:08.284927    2174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:43:08.820428    2174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:43:09.320410    2174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:43:09.820455    2174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:43:10.320446    2174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:43:10.820436    2174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:43:11.320383    2174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:43:11.820491    2174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:43:12.320295    2174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:43:12.820432    2174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:43:13.320104    2174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:43:13.820376    2174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:43:14.320329    2174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:43:14.820349    2174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:43:15.320401    2174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:43:15.820376    2174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:43:16.320050    2174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:43:16.820414    2174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:43:17.320293    2174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:43:17.819999    2174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:43:18.320204    2174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:43:18.820239    2174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:43:19.320279    2174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:43:19.820234    2174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:43:20.320254    2174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:43:20.820190    2174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:43:21.320253    2174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:43:21.820222    2174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:43:22.320208    2174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:43:22.820226    2174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:43:23.320208    2174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:43:23.819983    2174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:43:24.319843    2174 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 16:43:24.363196    2174 kubeadm.go:1081] duration metric: took 16.211140583s to wait for elevateKubeSystemPrivileges.
	I0906 16:43:24.363208    2174 kubeadm.go:406] StartCluster complete in 31.046964292s
	I0906 16:43:24.363217    2174 settings.go:142] acquiring lock: {Name:mke09ef7a1e2d249f8e4127472ec9f16828a9cb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:43:24.363300    2174 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:43:24.363713    2174 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/kubeconfig: {Name:mk4d1ce1d23510730a8780064cdf633efa514467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:43:24.363916    2174 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 16:43:24.363973    2174 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0906 16:43:24.364018    2174 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-208000"
	I0906 16:43:24.364028    2174 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-208000"
	I0906 16:43:24.364049    2174 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-208000"
	I0906 16:43:24.364052    2174 host.go:66] Checking if "ingress-addon-legacy-208000" exists ...
	I0906 16:43:24.364057    2174 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-208000"
	I0906 16:43:24.364187    2174 config.go:182] Loaded profile config "ingress-addon-legacy-208000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0906 16:43:24.364175    2174 kapi.go:59] client config for ingress-addon-legacy-208000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/client.key", CAFile:"/Users/jenkins/minikube-integration/17174-979/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103ca1d70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 16:43:24.364552    2174 cert_rotation.go:137] Starting client certificate rotation controller
	I0906 16:43:24.365176    2174 kapi.go:59] client config for ingress-addon-legacy-208000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/client.key", CAFile:"/Users/jenkins/minikube-integration/17174-979/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103ca1d70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 16:43:24.370020    2174 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 16:43:24.375009    2174 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 16:43:24.375016    2174 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 16:43:24.375024    2174 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/ingress-addon-legacy-208000/id_rsa Username:docker}
	I0906 16:43:24.377315    2174 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-208000"
	I0906 16:43:24.377331    2174 host.go:66] Checking if "ingress-addon-legacy-208000" exists ...
	I0906 16:43:24.378037    2174 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 16:43:24.378042    2174 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 16:43:24.378047    2174 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/ingress-addon-legacy-208000/id_rsa Username:docker}
	I0906 16:43:24.399600    2174 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-208000" context rescaled to 1 replicas
	I0906 16:43:24.399623    2174 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:43:24.406012    2174 out.go:177] * Verifying Kubernetes components...
	I0906 16:43:24.414996    2174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 16:43:24.485003    2174 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0906 16:43:24.485312    2174 kapi.go:59] client config for ingress-addon-legacy-208000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/client.key", CAFile:"/Users/jenkins/minikube-integration/17174-979/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103ca1d70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 16:43:24.485449    2174 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-208000" to be "Ready" ...
	I0906 16:43:24.486870    2174 node_ready.go:49] node "ingress-addon-legacy-208000" has status "Ready":"True"
	I0906 16:43:24.486878    2174 node_ready.go:38] duration metric: took 1.420667ms waiting for node "ingress-addon-legacy-208000" to be "Ready" ...
	I0906 16:43:24.486882    2174 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 16:43:24.490767    2174 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-fdlvp" in "kube-system" namespace to be "Ready" ...
	I0906 16:43:24.492135    2174 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 16:43:24.501605    2174 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 16:43:24.781489    2174 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0906 16:43:24.804670    2174 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0906 16:43:24.813202    2174 addons.go:502] enable addons completed in 449.235958ms: enabled=[storage-provisioner default-storageclass]
	I0906 16:43:26.500103    2174 pod_ready.go:102] pod "coredns-66bff467f8-fdlvp" in "kube-system" namespace has status "Ready":"False"
	I0906 16:43:28.509916    2174 pod_ready.go:102] pod "coredns-66bff467f8-fdlvp" in "kube-system" namespace has status "Ready":"False"
	I0906 16:43:29.004478    2174 pod_ready.go:92] pod "coredns-66bff467f8-fdlvp" in "kube-system" namespace has status "Ready":"True"
	I0906 16:43:29.004501    2174 pod_ready.go:81] duration metric: took 4.513815042s waiting for pod "coredns-66bff467f8-fdlvp" in "kube-system" namespace to be "Ready" ...
	I0906 16:43:29.004513    2174 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-gpldx" in "kube-system" namespace to be "Ready" ...
	I0906 16:43:29.006340    2174 pod_ready.go:97] error getting pod "coredns-66bff467f8-gpldx" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-gpldx" not found
	I0906 16:43:29.006356    2174 pod_ready.go:81] duration metric: took 1.833541ms waiting for pod "coredns-66bff467f8-gpldx" in "kube-system" namespace to be "Ready" ...
	E0906 16:43:29.006365    2174 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-66bff467f8-gpldx" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-gpldx" not found
	I0906 16:43:29.006374    2174 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-208000" in "kube-system" namespace to be "Ready" ...
	I0906 16:43:29.011176    2174 pod_ready.go:92] pod "etcd-ingress-addon-legacy-208000" in "kube-system" namespace has status "Ready":"True"
	I0906 16:43:29.011185    2174 pod_ready.go:81] duration metric: took 4.805083ms waiting for pod "etcd-ingress-addon-legacy-208000" in "kube-system" namespace to be "Ready" ...
	I0906 16:43:29.011192    2174 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-208000" in "kube-system" namespace to be "Ready" ...
	I0906 16:43:29.015272    2174 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-208000" in "kube-system" namespace has status "Ready":"True"
	I0906 16:43:29.015283    2174 pod_ready.go:81] duration metric: took 4.084417ms waiting for pod "kube-apiserver-ingress-addon-legacy-208000" in "kube-system" namespace to be "Ready" ...
	I0906 16:43:29.015291    2174 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-208000" in "kube-system" namespace to be "Ready" ...
	I0906 16:43:29.020672    2174 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-208000" in "kube-system" namespace has status "Ready":"True"
	I0906 16:43:29.020684    2174 pod_ready.go:81] duration metric: took 5.386417ms waiting for pod "kube-controller-manager-ingress-addon-legacy-208000" in "kube-system" namespace to be "Ready" ...
	I0906 16:43:29.020691    2174 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9t6zd" in "kube-system" namespace to be "Ready" ...
	I0906 16:43:29.199397    2174 request.go:629] Waited for 177.066166ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes/ingress-addon-legacy-208000
	I0906 16:43:29.206074    2174 pod_ready.go:92] pod "kube-proxy-9t6zd" in "kube-system" namespace has status "Ready":"True"
	I0906 16:43:29.206107    2174 pod_ready.go:81] duration metric: took 185.411208ms waiting for pod "kube-proxy-9t6zd" in "kube-system" namespace to be "Ready" ...
	I0906 16:43:29.206134    2174 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-208000" in "kube-system" namespace to be "Ready" ...
	I0906 16:43:29.399329    2174 request.go:629] Waited for 193.111792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-208000
	I0906 16:43:29.599386    2174 request.go:629] Waited for 192.115917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes/ingress-addon-legacy-208000
	I0906 16:43:29.604458    2174 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-208000" in "kube-system" namespace has status "Ready":"True"
	I0906 16:43:29.604492    2174 pod_ready.go:81] duration metric: took 398.347375ms waiting for pod "kube-scheduler-ingress-addon-legacy-208000" in "kube-system" namespace to be "Ready" ...
	I0906 16:43:29.604511    2174 pod_ready.go:38] duration metric: took 5.117725625s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 16:43:29.604547    2174 api_server.go:52] waiting for apiserver process to appear ...
	I0906 16:43:29.604833    2174 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 16:43:29.620271    2174 api_server.go:72] duration metric: took 5.220733334s to wait for apiserver process to appear ...
	I0906 16:43:29.620287    2174 api_server.go:88] waiting for apiserver healthz status ...
	I0906 16:43:29.620300    2174 api_server.go:253] Checking apiserver healthz at https://192.168.105.6:8443/healthz ...
	I0906 16:43:29.629141    2174 api_server.go:279] https://192.168.105.6:8443/healthz returned 200:
	ok
	I0906 16:43:29.630071    2174 api_server.go:141] control plane version: v1.18.20
	I0906 16:43:29.630086    2174 api_server.go:131] duration metric: took 9.794125ms to wait for apiserver health ...
	I0906 16:43:29.630093    2174 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 16:43:29.799366    2174 request.go:629] Waited for 169.170542ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0906 16:43:29.810122    2174 system_pods.go:59] 7 kube-system pods found
	I0906 16:43:29.810172    2174 system_pods.go:61] "coredns-66bff467f8-fdlvp" [41362368-c706-4b4e-90d3-d50503a5d550] Running
	I0906 16:43:29.810181    2174 system_pods.go:61] "etcd-ingress-addon-legacy-208000" [8c98eb8e-a98d-4aef-a6c5-14d3534a802f] Running
	I0906 16:43:29.810192    2174 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-208000" [f17ae23f-b5b5-4d2e-9995-f45462364131] Running
	I0906 16:43:29.810202    2174 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-208000" [b44cd014-ba2b-4ca9-8d1f-cf2f55659f00] Running
	I0906 16:43:29.810209    2174 system_pods.go:61] "kube-proxy-9t6zd" [032fdf23-4e2e-4005-ac1c-07a5be1603bd] Running
	I0906 16:43:29.810219    2174 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-208000" [248bb16a-d0cd-48b7-8e39-d5d7c8ab25f2] Running
	I0906 16:43:29.810226    2174 system_pods.go:61] "storage-provisioner" [ff304655-1994-4d73-9b34-2433999024d2] Running
	I0906 16:43:29.810236    2174 system_pods.go:74] duration metric: took 180.136667ms to wait for pod list to return data ...
	I0906 16:43:29.810261    2174 default_sa.go:34] waiting for default service account to be created ...
	I0906 16:43:29.999327    2174 request.go:629] Waited for 188.980875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/default/serviceaccounts
	I0906 16:43:30.004426    2174 default_sa.go:45] found service account: "default"
	I0906 16:43:30.004455    2174 default_sa.go:55] duration metric: took 194.187125ms for default service account to be created ...
	I0906 16:43:30.004470    2174 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 16:43:30.199323    2174 request.go:629] Waited for 194.732708ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0906 16:43:30.212266    2174 system_pods.go:86] 7 kube-system pods found
	I0906 16:43:30.212301    2174 system_pods.go:89] "coredns-66bff467f8-fdlvp" [41362368-c706-4b4e-90d3-d50503a5d550] Running
	I0906 16:43:30.212314    2174 system_pods.go:89] "etcd-ingress-addon-legacy-208000" [8c98eb8e-a98d-4aef-a6c5-14d3534a802f] Running
	I0906 16:43:30.212325    2174 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-208000" [f17ae23f-b5b5-4d2e-9995-f45462364131] Running
	I0906 16:43:30.212338    2174 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-208000" [b44cd014-ba2b-4ca9-8d1f-cf2f55659f00] Running
	I0906 16:43:30.212352    2174 system_pods.go:89] "kube-proxy-9t6zd" [032fdf23-4e2e-4005-ac1c-07a5be1603bd] Running
	I0906 16:43:30.212364    2174 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-208000" [248bb16a-d0cd-48b7-8e39-d5d7c8ab25f2] Running
	I0906 16:43:30.212374    2174 system_pods.go:89] "storage-provisioner" [ff304655-1994-4d73-9b34-2433999024d2] Running
	I0906 16:43:30.212389    2174 system_pods.go:126] duration metric: took 207.914875ms to wait for k8s-apps to be running ...
	I0906 16:43:30.212402    2174 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 16:43:30.212638    2174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 16:43:30.229642    2174 system_svc.go:56] duration metric: took 17.236916ms WaitForService to wait for kubelet.
	I0906 16:43:30.229664    2174 kubeadm.go:581] duration metric: took 5.830138167s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0906 16:43:30.229684    2174 node_conditions.go:102] verifying NodePressure condition ...
	I0906 16:43:30.399333    2174 request.go:629] Waited for 169.579708ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes
	I0906 16:43:30.407569    2174 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0906 16:43:30.407626    2174 node_conditions.go:123] node cpu capacity is 2
	I0906 16:43:30.407662    2174 node_conditions.go:105] duration metric: took 177.969375ms to run NodePressure ...
	I0906 16:43:30.407693    2174 start.go:228] waiting for startup goroutines ...
	I0906 16:43:30.407713    2174 start.go:233] waiting for cluster config update ...
	I0906 16:43:30.407742    2174 start.go:242] writing updated cluster config ...
	I0906 16:43:30.409077    2174 ssh_runner.go:195] Run: rm -f paused
	I0906 16:43:30.474623    2174 start.go:600] kubectl: 1.27.2, cluster: 1.18.20 (minor skew: 9)
	I0906 16:43:30.478700    2174 out.go:177] 
	W0906 16:43:30.482693    2174 out.go:239] ! /usr/local/bin/kubectl is version 1.27.2, which may have incompatibilities with Kubernetes 1.18.20.
	I0906 16:43:30.487575    2174 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0906 16:43:30.494690    2174 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-208000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-09-06 23:42:40 UTC, ends at Wed 2023-09-06 23:44:41 UTC. --
	Sep 06 23:44:12 ingress-addon-legacy-208000 dockerd[1089]: time="2023-09-06T23:44:12.406745980Z" level=info msg="shim disconnected" id=bb1226643e0c8e8b692f8c43d345698a59db24d8719c04dd0f6c14084a8e75f7 namespace=moby
	Sep 06 23:44:12 ingress-addon-legacy-208000 dockerd[1089]: time="2023-09-06T23:44:12.406773022Z" level=warning msg="cleaning up after shim disconnected" id=bb1226643e0c8e8b692f8c43d345698a59db24d8719c04dd0f6c14084a8e75f7 namespace=moby
	Sep 06 23:44:12 ingress-addon-legacy-208000 dockerd[1089]: time="2023-09-06T23:44:12.406777355Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 23:44:26 ingress-addon-legacy-208000 dockerd[1089]: time="2023-09-06T23:44:26.421927967Z" level=info msg="shim disconnected" id=f9bdb7e81717af7fce9a2d758583ebdec7ae77f5dd6c00434e1bc42df7ffb96a namespace=moby
	Sep 06 23:44:26 ingress-addon-legacy-208000 dockerd[1089]: time="2023-09-06T23:44:26.422185761Z" level=warning msg="cleaning up after shim disconnected" id=f9bdb7e81717af7fce9a2d758583ebdec7ae77f5dd6c00434e1bc42df7ffb96a namespace=moby
	Sep 06 23:44:26 ingress-addon-legacy-208000 dockerd[1089]: time="2023-09-06T23:44:26.422203636Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 23:44:26 ingress-addon-legacy-208000 dockerd[1083]: time="2023-09-06T23:44:26.424157486Z" level=info msg="ignoring event" container=f9bdb7e81717af7fce9a2d758583ebdec7ae77f5dd6c00434e1bc42df7ffb96a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 23:44:26 ingress-addon-legacy-208000 dockerd[1089]: time="2023-09-06T23:44:26.434069239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 23:44:26 ingress-addon-legacy-208000 dockerd[1089]: time="2023-09-06T23:44:26.434099656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 23:44:26 ingress-addon-legacy-208000 dockerd[1089]: time="2023-09-06T23:44:26.434107239Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 23:44:26 ingress-addon-legacy-208000 dockerd[1089]: time="2023-09-06T23:44:26.434112906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 23:44:26 ingress-addon-legacy-208000 dockerd[1083]: time="2023-09-06T23:44:26.475445182Z" level=info msg="ignoring event" container=6a902186569718e67fcd7568d345583bf7e4920da4c2d834fd66a92375377471 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 23:44:26 ingress-addon-legacy-208000 dockerd[1089]: time="2023-09-06T23:44:26.475512724Z" level=info msg="shim disconnected" id=6a902186569718e67fcd7568d345583bf7e4920da4c2d834fd66a92375377471 namespace=moby
	Sep 06 23:44:26 ingress-addon-legacy-208000 dockerd[1089]: time="2023-09-06T23:44:26.475545891Z" level=warning msg="cleaning up after shim disconnected" id=6a902186569718e67fcd7568d345583bf7e4920da4c2d834fd66a92375377471 namespace=moby
	Sep 06 23:44:26 ingress-addon-legacy-208000 dockerd[1089]: time="2023-09-06T23:44:26.475550058Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 23:44:36 ingress-addon-legacy-208000 dockerd[1083]: time="2023-09-06T23:44:36.856237627Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=13699afe5b374b4310c49c463abd8e5e619a5dd5b0b9890866a6c7617b68bf5e
	Sep 06 23:44:36 ingress-addon-legacy-208000 dockerd[1083]: time="2023-09-06T23:44:36.861827291Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=13699afe5b374b4310c49c463abd8e5e619a5dd5b0b9890866a6c7617b68bf5e
	Sep 06 23:44:36 ingress-addon-legacy-208000 dockerd[1083]: time="2023-09-06T23:44:36.930727153Z" level=info msg="ignoring event" container=13699afe5b374b4310c49c463abd8e5e619a5dd5b0b9890866a6c7617b68bf5e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 23:44:36 ingress-addon-legacy-208000 dockerd[1089]: time="2023-09-06T23:44:36.930969488Z" level=info msg="shim disconnected" id=13699afe5b374b4310c49c463abd8e5e619a5dd5b0b9890866a6c7617b68bf5e namespace=moby
	Sep 06 23:44:36 ingress-addon-legacy-208000 dockerd[1089]: time="2023-09-06T23:44:36.931037155Z" level=warning msg="cleaning up after shim disconnected" id=13699afe5b374b4310c49c463abd8e5e619a5dd5b0b9890866a6c7617b68bf5e namespace=moby
	Sep 06 23:44:36 ingress-addon-legacy-208000 dockerd[1089]: time="2023-09-06T23:44:36.931043655Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 23:44:36 ingress-addon-legacy-208000 dockerd[1083]: time="2023-09-06T23:44:36.959949485Z" level=info msg="ignoring event" container=d6e0322d180a86e56a9eaf9e6c5fa8bf06e4e79dba54f5f79cf7aaced6aff984 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 23:44:36 ingress-addon-legacy-208000 dockerd[1089]: time="2023-09-06T23:44:36.960097902Z" level=info msg="shim disconnected" id=d6e0322d180a86e56a9eaf9e6c5fa8bf06e4e79dba54f5f79cf7aaced6aff984 namespace=moby
	Sep 06 23:44:36 ingress-addon-legacy-208000 dockerd[1089]: time="2023-09-06T23:44:36.960126486Z" level=warning msg="cleaning up after shim disconnected" id=d6e0322d180a86e56a9eaf9e6c5fa8bf06e4e79dba54f5f79cf7aaced6aff984 namespace=moby
	Sep 06 23:44:36 ingress-addon-legacy-208000 dockerd[1089]: time="2023-09-06T23:44:36.960130611Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	6a90218656971       a39a074194753                                                                                                      15 seconds ago       Exited              hello-world-app           2                   47b4d95fd17ab
	8679b14483c01       nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                                      39 seconds ago       Running             nginx                     0                   431d29a6e4863
	13699afe5b374       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   57 seconds ago       Exited              controller                0                   d6e0322d180a8
	a935bcb050e7a       a883f7fc35610                                                                                                      About a minute ago   Exited              patch                     1                   bf58aafb5bb1c
	90b540741e408       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              create                    0                   5271300f8f287
	0666f397cd7b8       gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944    About a minute ago   Running             storage-provisioner       0                   fa1defa03ac5a
	1c9b05408b4f8       6e17ba78cf3eb                                                                                                      About a minute ago   Running             coredns                   0                   da19de6f39f64
	ebd7ddfa29e3c       565297bc6f7d4                                                                                                      About a minute ago   Running             kube-proxy                0                   98d25a1702dbc
	9a9a87e180f5f       2694cf044d665                                                                                                      About a minute ago   Running             kube-apiserver            0                   ac27a1a8bb875
	57fb47f8df3f2       095f37015706d                                                                                                      About a minute ago   Running             kube-scheduler            0                   ceac4a5ca94dc
	c95a5adae58a0       ab707b0a0ea33                                                                                                      About a minute ago   Running             etcd                      0                   7c87126dbcca6
	73dda4d2cb691       68a4fac29a865                                                                                                      About a minute ago   Running             kube-controller-manager   0                   66e015288a26a
	
	* 
	* ==> coredns [1c9b05408b4f] <==
	* [INFO] 172.17.0.1:64194 - 30479 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000055084s
	[INFO] 172.17.0.1:64194 - 44003 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000027042s
	[INFO] 172.17.0.1:64194 - 19652 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00002725s
	[INFO] 172.17.0.1:64194 - 991 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00003425s
	[INFO] 172.17.0.1:13906 - 25949 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000015208s
	[INFO] 172.17.0.1:13906 - 47174 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000014834s
	[INFO] 172.17.0.1:13906 - 59728 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000013917s
	[INFO] 172.17.0.1:13906 - 22674 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000010376s
	[INFO] 172.17.0.1:13906 - 63853 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000012375s
	[INFO] 172.17.0.1:13906 - 23785 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000013167s
	[INFO] 172.17.0.1:13906 - 40195 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000014625s
	[INFO] 172.17.0.1:56202 - 8354 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000033668s
	[INFO] 172.17.0.1:41558 - 27132 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000030875s
	[INFO] 172.17.0.1:41558 - 25685 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000011458s
	[INFO] 172.17.0.1:41558 - 36351 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.0000095s
	[INFO] 172.17.0.1:41558 - 38036 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000009833s
	[INFO] 172.17.0.1:41558 - 23221 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000009292s
	[INFO] 172.17.0.1:41558 - 53058 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00001s
	[INFO] 172.17.0.1:41558 - 1735 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000012001s
	[INFO] 172.17.0.1:56202 - 27551 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000017375s
	[INFO] 172.17.0.1:56202 - 36474 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000021292s
	[INFO] 172.17.0.1:56202 - 27274 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000009125s
	[INFO] 172.17.0.1:56202 - 51196 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000008875s
	[INFO] 172.17.0.1:56202 - 3972 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00001s
	[INFO] 172.17.0.1:56202 - 20194 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000015042s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-208000
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-208000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2
	                    minikube.k8s.io/name=ingress-addon-legacy-208000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_06T16_43_08_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 06 Sep 2023 23:43:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-208000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 06 Sep 2023 23:44:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 06 Sep 2023 23:44:14 +0000   Wed, 06 Sep 2023 23:43:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 06 Sep 2023 23:44:14 +0000   Wed, 06 Sep 2023 23:43:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 06 Sep 2023 23:44:14 +0000   Wed, 06 Sep 2023 23:43:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 06 Sep 2023 23:44:14 +0000   Wed, 06 Sep 2023 23:43:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.6
	  Hostname:    ingress-addon-legacy-208000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	System Info:
	  Machine ID:                 a5d1fcc88b3049a89f82be72d8ec5d68
	  System UUID:                a5d1fcc88b3049a89f82be72d8ec5d68
	  Boot ID:                    6f47d58b-0ed8-4396-974f-171ca6832e2f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.5
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-msjcr                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 coredns-66bff467f8-fdlvp                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     78s
	  kube-system                 etcd-ingress-addon-legacy-208000                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-apiserver-ingress-addon-legacy-208000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-208000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-proxy-9t6zd                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-scheduler-ingress-addon-legacy-208000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 87s   kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  87s   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  87s   kubelet     Node ingress-addon-legacy-208000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    87s   kubelet     Node ingress-addon-legacy-208000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     87s   kubelet     Node ingress-addon-legacy-208000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                87s   kubelet     Node ingress-addon-legacy-208000 status is now: NodeReady
	  Normal  Starting                 77s   kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Sep 6 23:42] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.665816] EINJ: EINJ table not found.
	[  +0.524325] systemd-fstab-generator[116]: Ignoring "noauto" for root device
	[  +0.044093] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000895] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.129280] systemd-fstab-generator[480]: Ignoring "noauto" for root device
	[  +0.058713] systemd-fstab-generator[491]: Ignoring "noauto" for root device
	[  +0.462171] systemd-fstab-generator[792]: Ignoring "noauto" for root device
	[  +0.163756] systemd-fstab-generator[828]: Ignoring "noauto" for root device
	[  +0.083898] systemd-fstab-generator[839]: Ignoring "noauto" for root device
	[  +0.081032] systemd-fstab-generator[852]: Ignoring "noauto" for root device
	[  +4.330498] systemd-fstab-generator[1057]: Ignoring "noauto" for root device
	[  +1.449901] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.676854] systemd-fstab-generator[1525]: Ignoring "noauto" for root device
	[Sep 6 23:43] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.100780] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +5.677539] systemd-fstab-generator[2617]: Ignoring "noauto" for root device
	[ +16.239861] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.756568] kauditd_printk_skb: 13 callbacks suppressed
	[  +4.488940] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[ +23.668023] kauditd_printk_skb: 5 callbacks suppressed
	[Sep 6 23:44] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [c95a5adae58a] <==
	* raft2023/09/06 23:43:02 INFO: ed054832bd1917e1 became follower at term 0
	raft2023/09/06 23:43:02 INFO: newRaft ed054832bd1917e1 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/09/06 23:43:02 INFO: ed054832bd1917e1 became follower at term 1
	raft2023/09/06 23:43:02 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-09-06 23:43:02.777585 W | auth: simple token is not cryptographically signed
	2023-09-06 23:43:02.778570 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-09-06 23:43:02.781105 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-09-06 23:43:02.781443 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-09-06 23:43:02.781927 I | embed: listening for peers on 192.168.105.6:2380
	2023-09-06 23:43:02.782126 I | etcdserver: ed054832bd1917e1 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/09/06 23:43:02 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-09-06 23:43:02.782373 I | etcdserver/membership: added member ed054832bd1917e1 [https://192.168.105.6:2380] to cluster 45a39c2c59b0edf4
	raft2023/09/06 23:43:03 INFO: ed054832bd1917e1 is starting a new election at term 1
	raft2023/09/06 23:43:03 INFO: ed054832bd1917e1 became candidate at term 2
	raft2023/09/06 23:43:03 INFO: ed054832bd1917e1 received MsgVoteResp from ed054832bd1917e1 at term 2
	raft2023/09/06 23:43:03 INFO: ed054832bd1917e1 became leader at term 2
	raft2023/09/06 23:43:03 INFO: raft.node: ed054832bd1917e1 elected leader ed054832bd1917e1 at term 2
	2023-09-06 23:43:03.684688 I | etcdserver: published {Name:ingress-addon-legacy-208000 ClientURLs:[https://192.168.105.6:2379]} to cluster 45a39c2c59b0edf4
	2023-09-06 23:43:03.684778 I | embed: ready to serve client requests
	2023-09-06 23:43:03.684857 I | embed: ready to serve client requests
	2023-09-06 23:43:03.685508 I | embed: serving client requests on 192.168.105.6:2379
	2023-09-06 23:43:03.685546 I | embed: serving client requests on 127.0.0.1:2379
	2023-09-06 23:43:03.685603 I | etcdserver: setting up the initial cluster version to 3.4
	2023-09-06 23:43:03.686023 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-09-06 23:43:03.686071 I | etcdserver/api: enabled capabilities for version 3.4
	
	* 
	* ==> kernel <==
	*  23:44:41 up 2 min,  0 users,  load average: 0.85, 0.45, 0.17
	Linux ingress-addon-legacy-208000 5.10.57 #1 SMP PREEMPT Thu Aug 24 12:01:08 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [9a9a87e180f5] <==
	* I0906 23:43:05.147980       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	E0906 23:43:05.152216       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.105.6, ResourceVersion: 0, AdditionalErrorMsg: 
	I0906 23:43:05.224823       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0906 23:43:05.225468       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0906 23:43:05.235346       1 cache.go:39] Caches are synced for autoregister controller
	I0906 23:43:05.235419       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0906 23:43:05.235526       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0906 23:43:06.126068       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0906 23:43:06.126828       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0906 23:43:06.138396       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0906 23:43:06.144721       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0906 23:43:06.144746       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0906 23:43:06.283962       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0906 23:43:06.294005       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0906 23:43:06.393329       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.105.6]
	I0906 23:43:06.393824       1 controller.go:609] quota admission added evaluator for: endpoints
	I0906 23:43:06.395490       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0906 23:43:07.424171       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0906 23:43:07.906827       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0906 23:43:08.094031       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0906 23:43:14.307736       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0906 23:43:23.485529       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0906 23:43:23.931297       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0906 23:43:30.780137       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0906 23:43:59.081471       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [73dda4d2cb69] <==
	* I0906 23:43:23.879152       1 shared_informer.go:230] Caches are synced for expand 
	I0906 23:43:23.881325       1 shared_informer.go:230] Caches are synced for disruption 
	I0906 23:43:23.881333       1 disruption.go:339] Sending events to api server.
	I0906 23:43:23.929897       1 shared_informer.go:230] Caches are synced for deployment 
	I0906 23:43:23.934213       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"efd00bee-f6ed-4728-be2b-b7687caff678", APIVersion:"apps/v1", ResourceVersion:"192", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 2
	I0906 23:43:23.935854       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"93bc1bfd-302e-4288-9b0f-75e8a70ab8ff", APIVersion:"apps/v1", ResourceVersion:"317", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-gpldx
	I0906 23:43:23.954116       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"93bc1bfd-302e-4288-9b0f-75e8a70ab8ff", APIVersion:"apps/v1", ResourceVersion:"317", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-fdlvp
	I0906 23:43:23.981082       1 shared_informer.go:230] Caches are synced for resource quota 
	I0906 23:43:24.028688       1 shared_informer.go:230] Caches are synced for HPA 
	I0906 23:43:24.029868       1 shared_informer.go:230] Caches are synced for resource quota 
	I0906 23:43:24.036500       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0906 23:43:24.039632       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0906 23:43:24.039665       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0906 23:43:24.049647       1 shared_informer.go:230] Caches are synced for namespace 
	I0906 23:43:24.088866       1 shared_informer.go:230] Caches are synced for service account 
	I0906 23:43:24.388067       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"efd00bee-f6ed-4728-be2b-b7687caff678", APIVersion:"apps/v1", ResourceVersion:"350", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0906 23:43:24.397878       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"93bc1bfd-302e-4288-9b0f-75e8a70ab8ff", APIVersion:"apps/v1", ResourceVersion:"351", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-gpldx
	I0906 23:43:30.782835       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"3408f8dc-c7b7-42af-92ef-31d1b899bba6", APIVersion:"apps/v1", ResourceVersion:"435", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-rht6x
	I0906 23:43:30.788906       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"da9a9446-4020-46f8-8b9a-551fbef2a4d7", APIVersion:"batch/v1", ResourceVersion:"437", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-2h2f4
	I0906 23:43:30.792604       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"57b3c534-de78-412e-b403-6607549061a5", APIVersion:"apps/v1", ResourceVersion:"433", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0906 23:43:30.818441       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"27824b82-0dbc-4376-b2c0-fbd4e8136b0a", APIVersion:"batch/v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-htqps
	I0906 23:43:33.819258       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"da9a9446-4020-46f8-8b9a-551fbef2a4d7", APIVersion:"batch/v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0906 23:43:34.863364       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"27824b82-0dbc-4376-b2c0-fbd4e8136b0a", APIVersion:"batch/v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0906 23:44:09.377484       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"f5ca3f7e-775a-4fd7-a856-41cc5ad38c19", APIVersion:"apps/v1", ResourceVersion:"575", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0906 23:44:09.382262       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"64ede0ff-bc84-42c3-ae24-515874b65789", APIVersion:"apps/v1", ResourceVersion:"576", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-msjcr
	
	* 
	* ==> kube-proxy [ebd7ddfa29e3] <==
	* W0906 23:43:24.031325       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0906 23:43:24.035409       1 node.go:136] Successfully retrieved node IP: 192.168.105.6
	I0906 23:43:24.035432       1 server_others.go:186] Using iptables Proxier.
	I0906 23:43:24.035609       1 server.go:583] Version: v1.18.20
	I0906 23:43:24.039047       1 config.go:315] Starting service config controller
	I0906 23:43:24.039070       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0906 23:43:24.039208       1 config.go:133] Starting endpoints config controller
	I0906 23:43:24.039240       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0906 23:43:24.139497       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0906 23:43:24.139498       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [57fb47f8df3f] <==
	* W0906 23:43:05.155455       1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0906 23:43:05.155473       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0906 23:43:05.155477       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0906 23:43:05.155479       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0906 23:43:05.173915       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0906 23:43:05.173950       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0906 23:43:05.174830       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 23:43:05.174857       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 23:43:05.175558       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0906 23:43:05.177796       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0906 23:43:05.180086       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0906 23:43:05.180941       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0906 23:43:05.181040       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0906 23:43:05.181093       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0906 23:43:05.181182       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0906 23:43:05.181256       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0906 23:43:05.181623       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0906 23:43:05.181703       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0906 23:43:05.181761       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0906 23:43:05.181803       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0906 23:43:05.181861       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0906 23:43:05.181933       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0906 23:43:06.120280       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0906 23:43:06.237103       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0906 23:43:06.378262       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-09-06 23:42:40 UTC, ends at Wed 2023-09-06 23:44:42 UTC. --
	Sep 06 23:44:23 ingress-addon-legacy-208000 kubelet[2623]: I0906 23:44:23.335937    2623 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 4b48541a97c95126cc0f997055caf2b4bfec8113c6fcadd9da6f6086f2274da2
	Sep 06 23:44:23 ingress-addon-legacy-208000 kubelet[2623]: E0906 23:44:23.337760    2623 pod_workers.go:191] Error syncing pod ab303e62-d7ba-47e9-a07f-3e9457938bd6 ("kube-ingress-dns-minikube_kube-system(ab303e62-d7ba-47e9-a07f-3e9457938bd6)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(ab303e62-d7ba-47e9-a07f-3e9457938bd6)"
	Sep 06 23:44:24 ingress-addon-legacy-208000 kubelet[2623]: I0906 23:44:24.846379    2623 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-pqdm7" (UniqueName: "kubernetes.io/secret/ab303e62-d7ba-47e9-a07f-3e9457938bd6-minikube-ingress-dns-token-pqdm7") pod "ab303e62-d7ba-47e9-a07f-3e9457938bd6" (UID: "ab303e62-d7ba-47e9-a07f-3e9457938bd6")
	Sep 06 23:44:24 ingress-addon-legacy-208000 kubelet[2623]: I0906 23:44:24.850133    2623 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab303e62-d7ba-47e9-a07f-3e9457938bd6-minikube-ingress-dns-token-pqdm7" (OuterVolumeSpecName: "minikube-ingress-dns-token-pqdm7") pod "ab303e62-d7ba-47e9-a07f-3e9457938bd6" (UID: "ab303e62-d7ba-47e9-a07f-3e9457938bd6"). InnerVolumeSpecName "minikube-ingress-dns-token-pqdm7". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 06 23:44:24 ingress-addon-legacy-208000 kubelet[2623]: I0906 23:44:24.947794    2623 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-pqdm7" (UniqueName: "kubernetes.io/secret/ab303e62-d7ba-47e9-a07f-3e9457938bd6-minikube-ingress-dns-token-pqdm7") on node "ingress-addon-legacy-208000" DevicePath ""
	Sep 06 23:44:26 ingress-addon-legacy-208000 kubelet[2623]: I0906 23:44:26.337657    2623 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: bb1226643e0c8e8b692f8c43d345698a59db24d8719c04dd0f6c14084a8e75f7
	Sep 06 23:44:26 ingress-addon-legacy-208000 kubelet[2623]: W0906 23:44:26.486560    2623 container.go:412] Failed to create summary reader for "/kubepods/besteffort/pod1e5db6b2-69d0-4a49-ac2f-a4ce5527110d/6a902186569718e67fcd7568d345583bf7e4920da4c2d834fd66a92375377471": none of the resources are being tracked.
	Sep 06 23:44:26 ingress-addon-legacy-208000 kubelet[2623]: I0906 23:44:26.550316    2623 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 4b48541a97c95126cc0f997055caf2b4bfec8113c6fcadd9da6f6086f2274da2
	Sep 06 23:44:26 ingress-addon-legacy-208000 kubelet[2623]: W0906 23:44:26.551201    2623 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-msjcr through plugin: invalid network status for
	Sep 06 23:44:26 ingress-addon-legacy-208000 kubelet[2623]: I0906 23:44:26.554225    2623 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 6a902186569718e67fcd7568d345583bf7e4920da4c2d834fd66a92375377471
	Sep 06 23:44:26 ingress-addon-legacy-208000 kubelet[2623]: E0906 23:44:26.554354    2623 pod_workers.go:191] Error syncing pod 1e5db6b2-69d0-4a49-ac2f-a4ce5527110d ("hello-world-app-5f5d8b66bb-msjcr_default(1e5db6b2-69d0-4a49-ac2f-a4ce5527110d)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-msjcr_default(1e5db6b2-69d0-4a49-ac2f-a4ce5527110d)"
	Sep 06 23:44:26 ingress-addon-legacy-208000 kubelet[2623]: I0906 23:44:26.557817    2623 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: bb1226643e0c8e8b692f8c43d345698a59db24d8719c04dd0f6c14084a8e75f7
	Sep 06 23:44:27 ingress-addon-legacy-208000 kubelet[2623]: W0906 23:44:27.559087    2623 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-msjcr through plugin: invalid network status for
	Sep 06 23:44:34 ingress-addon-legacy-208000 kubelet[2623]: E0906 23:44:34.845678    2623 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-rht6x.178273edc20cd158", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-rht6x", UID:"80dbc972-d6b3-4099-a082-5cdc865770ad", APIVersion:"v1", ResourceVersion:"439", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-208000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1366198b25a7d58, ext:86965163282, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1366198b25a7d58, ext:86965163282, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-rht6x.178273edc20cd158" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 06 23:44:34 ingress-addon-legacy-208000 kubelet[2623]: E0906 23:44:34.853322    2623 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-rht6x.178273edc20cd158", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-rht6x", UID:"80dbc972-d6b3-4099-a082-5cdc865770ad", APIVersion:"v1", ResourceVersion:"439", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-208000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1366198b25a7d58, ext:86965163282, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1366198b298608f, ext:86969219145, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-rht6x.178273edc20cd158" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 06 23:44:37 ingress-addon-legacy-208000 kubelet[2623]: W0906 23:44:37.697106    2623 pod_container_deletor.go:77] Container "d6e0322d180a86e56a9eaf9e6c5fa8bf06e4e79dba54f5f79cf7aaced6aff984" not found in pod's containers
	Sep 06 23:44:38 ingress-addon-legacy-208000 kubelet[2623]: I0906 23:44:38.335559    2623 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 6a902186569718e67fcd7568d345583bf7e4920da4c2d834fd66a92375377471
	Sep 06 23:44:38 ingress-addon-legacy-208000 kubelet[2623]: E0906 23:44:38.337409    2623 pod_workers.go:191] Error syncing pod 1e5db6b2-69d0-4a49-ac2f-a4ce5527110d ("hello-world-app-5f5d8b66bb-msjcr_default(1e5db6b2-69d0-4a49-ac2f-a4ce5527110d)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-msjcr_default(1e5db6b2-69d0-4a49-ac2f-a4ce5527110d)"
	Sep 06 23:44:39 ingress-addon-legacy-208000 kubelet[2623]: I0906 23:44:39.017011    2623 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/80dbc972-d6b3-4099-a082-5cdc865770ad-webhook-cert") pod "80dbc972-d6b3-4099-a082-5cdc865770ad" (UID: "80dbc972-d6b3-4099-a082-5cdc865770ad")
	Sep 06 23:44:39 ingress-addon-legacy-208000 kubelet[2623]: I0906 23:44:39.017889    2623 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-dthr2" (UniqueName: "kubernetes.io/secret/80dbc972-d6b3-4099-a082-5cdc865770ad-ingress-nginx-token-dthr2") pod "80dbc972-d6b3-4099-a082-5cdc865770ad" (UID: "80dbc972-d6b3-4099-a082-5cdc865770ad")
	Sep 06 23:44:39 ingress-addon-legacy-208000 kubelet[2623]: I0906 23:44:39.030228    2623 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80dbc972-d6b3-4099-a082-5cdc865770ad-ingress-nginx-token-dthr2" (OuterVolumeSpecName: "ingress-nginx-token-dthr2") pod "80dbc972-d6b3-4099-a082-5cdc865770ad" (UID: "80dbc972-d6b3-4099-a082-5cdc865770ad"). InnerVolumeSpecName "ingress-nginx-token-dthr2". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 06 23:44:39 ingress-addon-legacy-208000 kubelet[2623]: I0906 23:44:39.030662    2623 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80dbc972-d6b3-4099-a082-5cdc865770ad-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "80dbc972-d6b3-4099-a082-5cdc865770ad" (UID: "80dbc972-d6b3-4099-a082-5cdc865770ad"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 06 23:44:39 ingress-addon-legacy-208000 kubelet[2623]: I0906 23:44:39.118508    2623 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/80dbc972-d6b3-4099-a082-5cdc865770ad-webhook-cert") on node "ingress-addon-legacy-208000" DevicePath ""
	Sep 06 23:44:39 ingress-addon-legacy-208000 kubelet[2623]: I0906 23:44:39.118601    2623 reconciler.go:319] Volume detached for volume "ingress-nginx-token-dthr2" (UniqueName: "kubernetes.io/secret/80dbc972-d6b3-4099-a082-5cdc865770ad-ingress-nginx-token-dthr2") on node "ingress-addon-legacy-208000" DevicePath ""
	Sep 06 23:44:40 ingress-addon-legacy-208000 kubelet[2623]: W0906 23:44:40.359818    2623 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/80dbc972-d6b3-4099-a082-5cdc865770ad/volumes" does not exist
	
	* 
	* ==> storage-provisioner [0666f397cd7b] <==
	* I0906 23:43:26.641188       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 23:43:26.646213       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 23:43:26.646232       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 23:43:26.648989       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 23:43:26.649079       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-208000_9fd7a6e1-4666-455c-8bb5-dccce004e0d0!
	I0906 23:43:26.649672       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f7d1ad43-9816-49a9-870e-707c23962572", APIVersion:"v1", ResourceVersion:"389", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-208000_9fd7a6e1-4666-455c-8bb5-dccce004e0d0 became leader
	I0906 23:43:26.750795       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-208000_9fd7a6e1-4666-455c-8bb5-dccce004e0d0!
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-208000 -n ingress-addon-legacy-208000
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-208000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (56.63s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.35s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-668000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-668000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.279418833s)

                                                
                                                
-- stdout --
	* [mount-start-1-668000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-668000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-668000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-668000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-668000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-668000 -n mount-start-1-668000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-668000 -n mount-start-1-668000: exit status 7 (70.856666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-668000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.35s)
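The repeated `Failed to connect to "/var/run/socket_vmnet": Connection refused` in the run above points at a host-level problem rather than the test itself: the socket_vmnet daemon was not listening when QEMU tried to attach its network. A minimal pre-flight check, sketched under the assumption of a Homebrew-managed socket_vmnet install (the socket path is taken from the log; the `brew services` invocation is an assumption, not something this report confirms):

```shell
# Check whether the socket_vmnet daemon is up before running qemu2-driver tests.
# The socket path matches the one minikube tried to use in the log above.
if [ -S /var/run/socket_vmnet ]; then
  echo "socket_vmnet socket present"
else
  # Assumed remedy for a Homebrew install; adjust for a manual install.
  echo "socket_vmnet socket missing; try: sudo brew services start socket_vmnet"
fi
```

If the socket is absent on the CI host, every qemu2-driver test in this report fails the same way before minikube can even boot the VM, which matches the uniform exit status 80 failures that follow.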

TestMultiNode/serial/FreshStart2Nodes (9.9s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-994000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:85: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-994000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.826065542s)

-- stdout --
	* [multinode-994000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-994000 in cluster multinode-994000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-994000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 16:47:00.731550    2469 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:47:00.731910    2469 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:47:00.731915    2469 out.go:309] Setting ErrFile to fd 2...
	I0906 16:47:00.731917    2469 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:47:00.732081    2469 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:47:00.733455    2469 out.go:303] Setting JSON to false
	I0906 16:47:00.748964    2469 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":994,"bootTime":1694043026,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0906 16:47:00.749030    2469 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 16:47:00.753051    2469 out.go:177] * [multinode-994000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 16:47:00.761283    2469 notify.go:220] Checking for updates...
	I0906 16:47:00.765209    2469 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 16:47:00.768232    2469 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:47:00.771214    2469 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 16:47:00.774110    2469 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 16:47:00.777229    2469 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	I0906 16:47:00.780227    2469 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 16:47:00.781672    2469 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 16:47:00.786208    2469 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 16:47:00.793022    2469 start.go:298] selected driver: qemu2
	I0906 16:47:00.793027    2469 start.go:902] validating driver "qemu2" against <nil>
	I0906 16:47:00.793034    2469 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 16:47:00.794936    2469 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 16:47:00.798255    2469 out.go:177] * Automatically selected the socket_vmnet network
	I0906 16:47:00.801284    2469 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 16:47:00.801304    2469 cni.go:84] Creating CNI manager for ""
	I0906 16:47:00.801308    2469 cni.go:136] 0 nodes found, recommending kindnet
	I0906 16:47:00.801312    2469 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0906 16:47:00.801317    2469 start_flags.go:321] config:
	{Name:multinode-994000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-994000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:doc
ker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAge
ntPID:0 AutoPauseInterval:1m0s}
	I0906 16:47:00.805472    2469 iso.go:125] acquiring lock: {Name:mk208580a41bd83b6238142fbb7ad0497ef8967b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:47:00.812221    2469 out.go:177] * Starting control plane node multinode-994000 in cluster multinode-994000
	I0906 16:47:00.816220    2469 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 16:47:00.816242    2469 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 16:47:00.816260    2469 cache.go:57] Caching tarball of preloaded images
	I0906 16:47:00.816316    2469 preload.go:174] Found /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 16:47:00.816322    2469 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 16:47:00.816523    2469 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/multinode-994000/config.json ...
	I0906 16:47:00.816538    2469 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/multinode-994000/config.json: {Name:mk37b7a18305baa85d651b659e23451d90a9ccbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:47:00.816790    2469 start.go:365] acquiring machines lock for multinode-994000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:47:00.816825    2469 start.go:369] acquired machines lock for "multinode-994000" in 29.333µs
	I0906 16:47:00.816837    2469 start.go:93] Provisioning new machine with config: &{Name:multinode-994000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-994000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:47:00.816867    2469 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:47:00.824196    2469 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 16:47:00.839571    2469 start.go:159] libmachine.API.Create for "multinode-994000" (driver="qemu2")
	I0906 16:47:00.839593    2469 client.go:168] LocalClient.Create starting
	I0906 16:47:00.839668    2469 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:47:00.839692    2469 main.go:141] libmachine: Decoding PEM data...
	I0906 16:47:00.839703    2469 main.go:141] libmachine: Parsing certificate...
	I0906 16:47:00.839748    2469 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:47:00.839766    2469 main.go:141] libmachine: Decoding PEM data...
	I0906 16:47:00.839783    2469 main.go:141] libmachine: Parsing certificate...
	I0906 16:47:00.840098    2469 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:47:00.958672    2469 main.go:141] libmachine: Creating SSH key...
	I0906 16:47:01.120397    2469 main.go:141] libmachine: Creating Disk image...
	I0906 16:47:01.120406    2469 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:47:01.120546    2469 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/multinode-994000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/multinode-994000/disk.qcow2
	I0906 16:47:01.129194    2469 main.go:141] libmachine: STDOUT: 
	I0906 16:47:01.129210    2469 main.go:141] libmachine: STDERR: 
	I0906 16:47:01.129271    2469 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/multinode-994000/disk.qcow2 +20000M
	I0906 16:47:01.136473    2469 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:47:01.136485    2469 main.go:141] libmachine: STDERR: 
	I0906 16:47:01.136500    2469 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/multinode-994000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/multinode-994000/disk.qcow2
	I0906 16:47:01.136505    2469 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:47:01.136540    2469 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/multinode-994000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/multinode-994000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/multinode-994000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:49:e6:03:be:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/multinode-994000/disk.qcow2
	I0906 16:47:01.138085    2469 main.go:141] libmachine: STDOUT: 
	I0906 16:47:01.138098    2469 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:47:01.138117    2469 client.go:171] LocalClient.Create took 298.52325ms
	I0906 16:47:03.140244    2469 start.go:128] duration metric: createHost completed in 2.323407791s
	I0906 16:47:03.140308    2469 start.go:83] releasing machines lock for "multinode-994000", held for 2.323521292s
	W0906 16:47:03.140407    2469 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:47:03.151894    2469 out.go:177] * Deleting "multinode-994000" in qemu2 ...
	W0906 16:47:03.172518    2469 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:47:03.172548    2469 start.go:687] Will try again in 5 seconds ...
	I0906 16:47:08.174720    2469 start.go:365] acquiring machines lock for multinode-994000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:47:08.175266    2469 start.go:369] acquired machines lock for "multinode-994000" in 439µs
	I0906 16:47:08.175411    2469 start.go:93] Provisioning new machine with config: &{Name:multinode-994000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-994000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:47:08.175733    2469 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:47:08.184206    2469 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 16:47:08.232339    2469 start.go:159] libmachine.API.Create for "multinode-994000" (driver="qemu2")
	I0906 16:47:08.232392    2469 client.go:168] LocalClient.Create starting
	I0906 16:47:08.232526    2469 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:47:08.232590    2469 main.go:141] libmachine: Decoding PEM data...
	I0906 16:47:08.232608    2469 main.go:141] libmachine: Parsing certificate...
	I0906 16:47:08.232695    2469 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:47:08.232750    2469 main.go:141] libmachine: Decoding PEM data...
	I0906 16:47:08.232762    2469 main.go:141] libmachine: Parsing certificate...
	I0906 16:47:08.233316    2469 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:47:08.375662    2469 main.go:141] libmachine: Creating SSH key...
	I0906 16:47:08.467132    2469 main.go:141] libmachine: Creating Disk image...
	I0906 16:47:08.467137    2469 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:47:08.467270    2469 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/multinode-994000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/multinode-994000/disk.qcow2
	I0906 16:47:08.475918    2469 main.go:141] libmachine: STDOUT: 
	I0906 16:47:08.475937    2469 main.go:141] libmachine: STDERR: 
	I0906 16:47:08.476195    2469 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/multinode-994000/disk.qcow2 +20000M
	I0906 16:47:08.484376    2469 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:47:08.484392    2469 main.go:141] libmachine: STDERR: 
	I0906 16:47:08.484403    2469 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/multinode-994000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/multinode-994000/disk.qcow2
	I0906 16:47:08.484411    2469 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:47:08.484450    2469 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/multinode-994000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/multinode-994000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/multinode-994000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:71:44:92:9c:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/multinode-994000/disk.qcow2
	I0906 16:47:08.486039    2469 main.go:141] libmachine: STDOUT: 
	I0906 16:47:08.486053    2469 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:47:08.486066    2469 client.go:171] LocalClient.Create took 253.672125ms
	I0906 16:47:10.488198    2469 start.go:128] duration metric: createHost completed in 2.312490708s
	I0906 16:47:10.488250    2469 start.go:83] releasing machines lock for "multinode-994000", held for 2.313007166s
	W0906 16:47:10.488603    2469 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-994000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-994000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:47:10.499346    2469 out.go:177] 
	W0906 16:47:10.503360    2469 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 16:47:10.503382    2469 out.go:239] * 
	* 
	W0906 16:47:10.506104    2469 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 16:47:10.518326    2469 out.go:177] 

** /stderr **
multinode_test.go:87: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-994000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-994000 -n multinode-994000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-994000 -n multinode-994000: exit status 7 (67.546584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-994000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.90s)

TestMultiNode/serial/DeployApp2Nodes (69.16s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-994000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:481: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-994000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (127.5685ms)

** stderr ** 
	error: cluster "multinode-994000" does not exist

** /stderr **
multinode_test.go:483: failed to create busybox deployment to multinode cluster
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-994000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-994000 -- rollout status deployment/busybox: exit status 1 (56.58525ms)

** stderr ** 
	error: no server found for cluster "multinode-994000"

** /stderr **
multinode_test.go:488: failed to deploy busybox to multinode cluster
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-994000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-994000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (55.270375ms)

** stderr ** 
	error: no server found for cluster "multinode-994000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-994000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-994000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.396584ms)

** stderr ** 
	error: no server found for cluster "multinode-994000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-994000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-994000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.749083ms)

** stderr ** 
	error: no server found for cluster "multinode-994000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-994000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-994000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.971625ms)

** stderr ** 
	error: no server found for cluster "multinode-994000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0906 16:47:16.632383    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/functional-526000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-994000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-994000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.465125ms)

** stderr ** 
	error: no server found for cluster "multinode-994000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-994000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-994000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.583958ms)

** stderr ** 
	error: no server found for cluster "multinode-994000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-994000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-994000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.810166ms)

** stderr ** 
	error: no server found for cluster "multinode-994000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-994000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-994000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.973583ms)

** stderr ** 
	error: no server found for cluster "multinode-994000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-994000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-994000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.431375ms)

** stderr ** 
	error: no server found for cluster "multinode-994000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-994000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-994000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.745125ms)

** stderr ** 
	error: no server found for cluster "multinode-994000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:512: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-994000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:516: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-994000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (53.985666ms)

** stderr ** 
	error: no server found for cluster "multinode-994000"

** /stderr **
multinode_test.go:518: failed get Pod names
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-994000 -- exec  -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-994000 -- exec  -- nslookup kubernetes.io: exit status 1 (54.12125ms)

** stderr ** 
	error: no server found for cluster "multinode-994000"

** /stderr **
multinode_test.go:526: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-994000 -- exec  -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-994000 -- exec  -- nslookup kubernetes.default: exit status 1 (54.076125ms)

** stderr ** 
	error: no server found for cluster "multinode-994000"

** /stderr **
multinode_test.go:536: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-994000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-994000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (54.222417ms)

** stderr ** 
	error: no server found for cluster "multinode-994000"

** /stderr **
multinode_test.go:544: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-994000 -n multinode-994000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-994000 -n multinode-994000: exit status 7 (29.37125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-994000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (69.16s)

TestMultiNode/serial/PingHostFrom2Pods (0.08s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-994000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-994000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (54.653291ms)

** stderr ** 
	error: no server found for cluster "multinode-994000"

** /stderr **
multinode_test.go:554: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-994000 -n multinode-994000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-994000 -n multinode-994000: exit status 7 (29.64875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-994000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.08s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-994000 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-994000 -v 3 --alsologtostderr: exit status 89 (40.582708ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-994000"

-- /stdout --
** stderr ** 
	I0906 16:48:19.876801    2545 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:48:19.877004    2545 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:48:19.877006    2545 out.go:309] Setting ErrFile to fd 2...
	I0906 16:48:19.877009    2545 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:48:19.877125    2545 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:48:19.877352    2545 mustload.go:65] Loading cluster: multinode-994000
	I0906 16:48:19.877556    2545 config.go:182] Loaded profile config "multinode-994000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:48:19.882218    2545 out.go:177] * The control plane node must be running for this command
	I0906 16:48:19.886386    2545 out.go:177]   To start a cluster, run: "minikube start -p multinode-994000"

** /stderr **
multinode_test.go:112: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-994000 -v 3 --alsologtostderr" : exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-994000 -n multinode-994000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-994000 -n multinode-994000: exit status 7 (29.163667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-994000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/ProfileList (0.1s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:155: expected profile "multinode-994000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-994000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-994000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.1\",\"ClusterName\":\"multinode-994000\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-994000 -n multinode-994000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-994000 -n multinode-994000: exit status 7 (29.378292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-994000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.10s)

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-994000 status --output json --alsologtostderr
multinode_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-994000 status --output json --alsologtostderr: exit status 7 (29.477791ms)

-- stdout --
	{"Name":"multinode-994000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0906 16:48:20.050302    2555 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:48:20.050425    2555 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:48:20.050428    2555 out.go:309] Setting ErrFile to fd 2...
	I0906 16:48:20.050430    2555 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:48:20.050547    2555 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:48:20.050669    2555 out.go:303] Setting JSON to true
	I0906 16:48:20.050681    2555 mustload.go:65] Loading cluster: multinode-994000
	I0906 16:48:20.050749    2555 notify.go:220] Checking for updates...
	I0906 16:48:20.050859    2555 config.go:182] Loaded profile config "multinode-994000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:48:20.050863    2555 status.go:255] checking status of multinode-994000 ...
	I0906 16:48:20.051052    2555 status.go:330] multinode-994000 host status = "Stopped" (err=<nil>)
	I0906 16:48:20.051056    2555 status.go:343] host is not running, skipping remaining checks
	I0906 16:48:20.051058    2555 status.go:257] multinode-994000 status: &{Name:multinode-994000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:180: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-994000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-994000 -n multinode-994000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-994000 -n multinode-994000: exit status 7 (28.750625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-994000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)

TestMultiNode/serial/StopNode (0.13s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-994000 node stop m03
multinode_test.go:210: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-994000 node stop m03: exit status 85 (47.009792ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:212: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-994000 node stop m03": exit status 85
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-994000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-994000 status: exit status 7 (29.210333ms)

-- stdout --
	multinode-994000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-994000 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-994000 status --alsologtostderr: exit status 7 (28.762084ms)

-- stdout --
	multinode-994000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0906 16:48:20.184888    2563 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:48:20.185022    2563 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:48:20.185025    2563 out.go:309] Setting ErrFile to fd 2...
	I0906 16:48:20.185028    2563 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:48:20.185151    2563 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:48:20.185271    2563 out.go:303] Setting JSON to false
	I0906 16:48:20.185287    2563 mustload.go:65] Loading cluster: multinode-994000
	I0906 16:48:20.185334    2563 notify.go:220] Checking for updates...
	I0906 16:48:20.185468    2563 config.go:182] Loaded profile config "multinode-994000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:48:20.185472    2563 status.go:255] checking status of multinode-994000 ...
	I0906 16:48:20.185663    2563 status.go:330] multinode-994000 host status = "Stopped" (err=<nil>)
	I0906 16:48:20.185666    2563 status.go:343] host is not running, skipping remaining checks
	I0906 16:48:20.185669    2563 status.go:257] multinode-994000 status: &{Name:multinode-994000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:229: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-994000 status --alsologtostderr": multinode-994000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-994000 -n multinode-994000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-994000 -n multinode-994000: exit status 7 (29.116458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-994000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)

TestMultiNode/serial/StartAfterStop (0.1s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-994000 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-994000 node start m03 --alsologtostderr: exit status 85 (44.401542ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0906 16:48:20.243657    2567 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:48:20.243863    2567 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:48:20.243866    2567 out.go:309] Setting ErrFile to fd 2...
	I0906 16:48:20.243868    2567 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:48:20.243981    2567 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:48:20.244193    2567 mustload.go:65] Loading cluster: multinode-994000
	I0906 16:48:20.244371    2567 config.go:182] Loaded profile config "multinode-994000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:48:20.247624    2567 out.go:177] 
	W0906 16:48:20.250624    2567 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0906 16:48:20.250629    2567 out.go:239] * 
	* 
	W0906 16:48:20.252161    2567 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 16:48:20.255547    2567 out.go:177] 

** /stderr **
multinode_test.go:256: I0906 16:48:20.243657    2567 out.go:296] Setting OutFile to fd 1 ...
I0906 16:48:20.243863    2567 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 16:48:20.243866    2567 out.go:309] Setting ErrFile to fd 2...
I0906 16:48:20.243868    2567 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 16:48:20.243981    2567 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
I0906 16:48:20.244193    2567 mustload.go:65] Loading cluster: multinode-994000
I0906 16:48:20.244371    2567 config.go:182] Loaded profile config "multinode-994000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0906 16:48:20.247624    2567 out.go:177] 
W0906 16:48:20.250624    2567 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0906 16:48:20.250629    2567 out.go:239] * 
* 
W0906 16:48:20.252161    2567 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0906 16:48:20.255547    2567 out.go:177] 
multinode_test.go:257: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-994000 node start m03 --alsologtostderr": exit status 85
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-994000 status
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-994000 status: exit status 7 (28.901625ms)

-- stdout --
	multinode-994000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:263: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-994000 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-994000 -n multinode-994000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-994000 -n multinode-994000: exit status 7 (29.044917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-994000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.10s)
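Editor's note: the failures in this block exit with minikube's reserved status codes (85 for the missing node m03, 7 for `status` against a stopped host). As a triage aid, the sketch below maps only the codes observed in this report to the messages they accompanied; the `classify_exit` helper is hypothetical and the mapping is read off this log, not minikube's error-code table.

```shell
# Hypothetical triage helper: map exit statuses seen in this report
# to the failure modes they accompanied in the output above.
classify_exit() {
  case "$1" in
    7)  echo "status: host not running" ;;
    80) echo "GUEST_PROVISION: driver failed to start host" ;;
    85) echo "GUEST_NODE_RETRIEVE: node not found" ;;
    89) echo "control plane node must be running" ;;
    *)  echo "unknown ($1)" ;;
  esac
}

classify_exit 85  # → GUEST_NODE_RETRIEVE: node not found
```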

TestMultiNode/serial/RestartKeepsNodes (5.36s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-994000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-994000
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-994000 --wait=true -v=8 --alsologtostderr
multinode_test.go:295: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-994000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.177901042s)

-- stdout --
	* [multinode-994000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-994000 in cluster multinode-994000
	* Restarting existing qemu2 VM for "multinode-994000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-994000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 16:48:20.432501    2577 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:48:20.432606    2577 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:48:20.432610    2577 out.go:309] Setting ErrFile to fd 2...
	I0906 16:48:20.432612    2577 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:48:20.432735    2577 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:48:20.433689    2577 out.go:303] Setting JSON to false
	I0906 16:48:20.448675    2577 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1074,"bootTime":1694043026,"procs":400,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0906 16:48:20.448761    2577 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 16:48:20.453408    2577 out.go:177] * [multinode-994000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 16:48:20.460678    2577 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 16:48:20.464551    2577 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:48:20.460743    2577 notify.go:220] Checking for updates...
	I0906 16:48:20.470594    2577 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 16:48:20.473497    2577 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 16:48:20.476549    2577 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	I0906 16:48:20.479578    2577 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 16:48:20.481292    2577 config.go:182] Loaded profile config "multinode-994000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:48:20.481331    2577 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 16:48:20.485546    2577 out.go:177] * Using the qemu2 driver based on existing profile
	I0906 16:48:20.492354    2577 start.go:298] selected driver: qemu2
	I0906 16:48:20.492361    2577 start.go:902] validating driver "qemu2" against &{Name:multinode-994000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.1 ClusterName:multinode-994000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:48:20.492416    2577 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 16:48:20.494314    2577 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 16:48:20.494339    2577 cni.go:84] Creating CNI manager for ""
	I0906 16:48:20.494344    2577 cni.go:136] 1 nodes found, recommending kindnet
	I0906 16:48:20.494348    2577 start_flags.go:321] config:
	{Name:multinode-994000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-994000 Namespace:default API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:48:20.498357    2577 iso.go:125] acquiring lock: {Name:mk208580a41bd83b6238142fbb7ad0497ef8967b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:48:20.505571    2577 out.go:177] * Starting control plane node multinode-994000 in cluster multinode-994000
	I0906 16:48:20.509553    2577 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 16:48:20.509568    2577 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 16:48:20.509581    2577 cache.go:57] Caching tarball of preloaded images
	I0906 16:48:20.509642    2577 preload.go:174] Found /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 16:48:20.509649    2577 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 16:48:20.509714    2577 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/multinode-994000/config.json ...
	I0906 16:48:20.510074    2577 start.go:365] acquiring machines lock for multinode-994000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:48:20.510103    2577 start.go:369] acquired machines lock for "multinode-994000" in 23.375µs
	I0906 16:48:20.510113    2577 start.go:96] Skipping create...Using existing machine configuration
	I0906 16:48:20.510117    2577 fix.go:54] fixHost starting: 
	I0906 16:48:20.510227    2577 fix.go:102] recreateIfNeeded on multinode-994000: state=Stopped err=<nil>
	W0906 16:48:20.510235    2577 fix.go:128] unexpected machine state, will restart: <nil>
	I0906 16:48:20.518538    2577 out.go:177] * Restarting existing qemu2 VM for "multinode-994000" ...
	I0906 16:48:20.522607    2577 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/multinode-994000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/multinode-994000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/multinode-994000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:71:44:92:9c:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/multinode-994000/disk.qcow2
	I0906 16:48:20.524438    2577 main.go:141] libmachine: STDOUT: 
	I0906 16:48:20.524458    2577 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:48:20.524490    2577 fix.go:56] fixHost completed within 14.371875ms
	I0906 16:48:20.524494    2577 start.go:83] releasing machines lock for "multinode-994000", held for 14.388ms
	W0906 16:48:20.524501    2577 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 16:48:20.524537    2577 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:48:20.524541    2577 start.go:687] Will try again in 5 seconds ...
	I0906 16:48:25.526579    2577 start.go:365] acquiring machines lock for multinode-994000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:48:25.526910    2577 start.go:369] acquired machines lock for "multinode-994000" in 274.166µs
	I0906 16:48:25.527040    2577 start.go:96] Skipping create...Using existing machine configuration
	I0906 16:48:25.527069    2577 fix.go:54] fixHost starting: 
	I0906 16:48:25.527732    2577 fix.go:102] recreateIfNeeded on multinode-994000: state=Stopped err=<nil>
	W0906 16:48:25.527756    2577 fix.go:128] unexpected machine state, will restart: <nil>
	I0906 16:48:25.535054    2577 out.go:177] * Restarting existing qemu2 VM for "multinode-994000" ...
	I0906 16:48:25.539270    2577 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/multinode-994000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/multinode-994000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/multinode-994000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:71:44:92:9c:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/multinode-994000/disk.qcow2
	I0906 16:48:25.547430    2577 main.go:141] libmachine: STDOUT: 
	I0906 16:48:25.547501    2577 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:48:25.547579    2577 fix.go:56] fixHost completed within 20.507916ms
	I0906 16:48:25.547595    2577 start.go:83] releasing machines lock for "multinode-994000", held for 20.659958ms
	W0906 16:48:25.547759    2577 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-994000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-994000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:48:25.555140    2577 out.go:177] 
	W0906 16:48:25.559314    2577 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 16:48:25.559345    2577 out.go:239] * 
	* 
	W0906 16:48:25.561907    2577 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 16:48:25.570986    2577 out.go:177] 

** /stderr **
multinode_test.go:297: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-994000" : exit status 80
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-994000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-994000 -n multinode-994000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-994000 -n multinode-994000: exit status 7 (32.463375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-994000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (5.36s)
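Editor's note: every restart attempt in this block dies on `Failed to connect to "/var/run/socket_vmnet": Connection refused`, which suggests the socket_vmnet daemon was not running on the build agent. A minimal pre-flight sketch follows; the `check_socket` helper name is an assumption, and the default path is taken from this log.

```shell
# Check whether the socket_vmnet control socket exists before running
# qemu2-driver tests; its absence explains the "Connection refused"
# errors above. Helper name is hypothetical.
check_socket() {
  sock="${1:-/var/run/socket_vmnet}"
  if [ -S "$sock" ]; then
    echo "present: $sock"
  else
    echo "missing: $sock"
  fi
}

check_socket /nonexistent/socket_vmnet  # → missing: /nonexistent/socket_vmnet
```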

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-994000 node delete m03
multinode_test.go:394: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-994000 node delete m03: exit status 89 (39.016459ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-994000"

-- /stdout --
multinode_test.go:396: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-994000 node delete m03": exit status 89
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-994000 status --alsologtostderr
multinode_test.go:400: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-994000 status --alsologtostderr: exit status 7 (29.055ms)

-- stdout --
	multinode-994000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0906 16:48:25.747083    2591 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:48:25.747214    2591 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:48:25.747216    2591 out.go:309] Setting ErrFile to fd 2...
	I0906 16:48:25.747219    2591 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:48:25.747325    2591 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:48:25.747421    2591 out.go:303] Setting JSON to false
	I0906 16:48:25.747433    2591 mustload.go:65] Loading cluster: multinode-994000
	I0906 16:48:25.747495    2591 notify.go:220] Checking for updates...
	I0906 16:48:25.747596    2591 config.go:182] Loaded profile config "multinode-994000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:48:25.747602    2591 status.go:255] checking status of multinode-994000 ...
	I0906 16:48:25.747790    2591 status.go:330] multinode-994000 host status = "Stopped" (err=<nil>)
	I0906 16:48:25.747794    2591 status.go:343] host is not running, skipping remaining checks
	I0906 16:48:25.747796    2591 status.go:257] multinode-994000 status: &{Name:multinode-994000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:402: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-994000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-994000 -n multinode-994000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-994000 -n multinode-994000: exit status 7 (28.940291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-994000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

TestMultiNode/serial/StopMultiNode (0.15s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-994000 stop
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-994000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-994000 status: exit status 7 (29.719375ms)

-- stdout --
	multinode-994000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-994000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-994000 status --alsologtostderr: exit status 7 (28.906125ms)

-- stdout --
	multinode-994000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0906 16:48:25.895632    2599 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:48:25.895803    2599 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:48:25.895806    2599 out.go:309] Setting ErrFile to fd 2...
	I0906 16:48:25.895816    2599 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:48:25.895942    2599 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:48:25.896057    2599 out.go:303] Setting JSON to false
	I0906 16:48:25.896068    2599 mustload.go:65] Loading cluster: multinode-994000
	I0906 16:48:25.896126    2599 notify.go:220] Checking for updates...
	I0906 16:48:25.896238    2599 config.go:182] Loaded profile config "multinode-994000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:48:25.896246    2599 status.go:255] checking status of multinode-994000 ...
	I0906 16:48:25.896429    2599 status.go:330] multinode-994000 host status = "Stopped" (err=<nil>)
	I0906 16:48:25.896432    2599 status.go:343] host is not running, skipping remaining checks
	I0906 16:48:25.896434    2599 status.go:257] multinode-994000 status: &{Name:multinode-994000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:333: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-994000 status --alsologtostderr": multinode-994000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:337: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-994000 status --alsologtostderr": multinode-994000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-994000 -n multinode-994000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-994000 -n multinode-994000: exit status 7 (29.524625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-994000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (0.15s)

TestMultiNode/serial/RestartMultiNode (5.24s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-994000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:354: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-994000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.175493584s)

-- stdout --
	* [multinode-994000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-994000 in cluster multinode-994000
	* Restarting existing qemu2 VM for "multinode-994000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-994000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 16:48:25.953779    2603 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:48:25.953899    2603 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:48:25.953902    2603 out.go:309] Setting ErrFile to fd 2...
	I0906 16:48:25.953904    2603 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:48:25.954015    2603 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:48:25.954919    2603 out.go:303] Setting JSON to false
	I0906 16:48:25.970283    2603 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1079,"bootTime":1694043026,"procs":400,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0906 16:48:25.970335    2603 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 16:48:25.974720    2603 out.go:177] * [multinode-994000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 16:48:25.982626    2603 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 16:48:25.982709    2603 notify.go:220] Checking for updates...
	I0906 16:48:25.986621    2603 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:48:25.989689    2603 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 16:48:25.992621    2603 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 16:48:25.995575    2603 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	I0906 16:48:25.998677    2603 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 16:48:26.001860    2603 config.go:182] Loaded profile config "multinode-994000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:48:26.002085    2603 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 16:48:26.006674    2603 out.go:177] * Using the qemu2 driver based on existing profile
	I0906 16:48:26.013549    2603 start.go:298] selected driver: qemu2
	I0906 16:48:26.013556    2603 start.go:902] validating driver "qemu2" against &{Name:multinode-994000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.1 ClusterName:multinode-994000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:48:26.013612    2603 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 16:48:26.015591    2603 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 16:48:26.015617    2603 cni.go:84] Creating CNI manager for ""
	I0906 16:48:26.015621    2603 cni.go:136] 1 nodes found, recommending kindnet
	I0906 16:48:26.015626    2603 start_flags.go:321] config:
	{Name:multinode-994000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-994000 Namespace:default API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:48:26.019484    2603 iso.go:125] acquiring lock: {Name:mk208580a41bd83b6238142fbb7ad0497ef8967b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:48:26.026417    2603 out.go:177] * Starting control plane node multinode-994000 in cluster multinode-994000
	I0906 16:48:26.030589    2603 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 16:48:26.030614    2603 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 16:48:26.030623    2603 cache.go:57] Caching tarball of preloaded images
	I0906 16:48:26.030676    2603 preload.go:174] Found /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 16:48:26.030682    2603 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 16:48:26.030736    2603 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/multinode-994000/config.json ...
	I0906 16:48:26.031047    2603 start.go:365] acquiring machines lock for multinode-994000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:48:26.031074    2603 start.go:369] acquired machines lock for "multinode-994000" in 20.667µs
	I0906 16:48:26.031084    2603 start.go:96] Skipping create...Using existing machine configuration
	I0906 16:48:26.031089    2603 fix.go:54] fixHost starting: 
	I0906 16:48:26.031208    2603 fix.go:102] recreateIfNeeded on multinode-994000: state=Stopped err=<nil>
	W0906 16:48:26.031216    2603 fix.go:128] unexpected machine state, will restart: <nil>
	I0906 16:48:26.039536    2603 out.go:177] * Restarting existing qemu2 VM for "multinode-994000" ...
	I0906 16:48:26.043667    2603 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/multinode-994000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/multinode-994000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/multinode-994000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:71:44:92:9c:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/multinode-994000/disk.qcow2
	I0906 16:48:26.045742    2603 main.go:141] libmachine: STDOUT: 
	I0906 16:48:26.045759    2603 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:48:26.045789    2603 fix.go:56] fixHost completed within 14.699417ms
	I0906 16:48:26.045795    2603 start.go:83] releasing machines lock for "multinode-994000", held for 14.716833ms
	W0906 16:48:26.045802    2603 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 16:48:26.045832    2603 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:48:26.045837    2603 start.go:687] Will try again in 5 seconds ...
	I0906 16:48:31.047881    2603 start.go:365] acquiring machines lock for multinode-994000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:48:31.048410    2603 start.go:369] acquired machines lock for "multinode-994000" in 385.916µs
	I0906 16:48:31.048635    2603 start.go:96] Skipping create...Using existing machine configuration
	I0906 16:48:31.048672    2603 fix.go:54] fixHost starting: 
	I0906 16:48:31.049563    2603 fix.go:102] recreateIfNeeded on multinode-994000: state=Stopped err=<nil>
	W0906 16:48:31.049593    2603 fix.go:128] unexpected machine state, will restart: <nil>
	I0906 16:48:31.057992    2603 out.go:177] * Restarting existing qemu2 VM for "multinode-994000" ...
	I0906 16:48:31.061141    2603 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/multinode-994000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/multinode-994000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/multinode-994000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:71:44:92:9c:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/multinode-994000/disk.qcow2
	I0906 16:48:31.070294    2603 main.go:141] libmachine: STDOUT: 
	I0906 16:48:31.070343    2603 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:48:31.070446    2603 fix.go:56] fixHost completed within 21.791209ms
	I0906 16:48:31.070462    2603 start.go:83] releasing machines lock for "multinode-994000", held for 21.998333ms
	W0906 16:48:31.070647    2603 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-994000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-994000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:48:31.077052    2603 out.go:177] 
	W0906 16:48:31.081049    2603 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 16:48:31.081076    2603 out.go:239] * 
	* 
	W0906 16:48:31.083880    2603 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 16:48:31.090029    2603 out.go:177] 

** /stderr **
multinode_test.go:356: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-994000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-994000 -n multinode-994000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-994000 -n multinode-994000: exit status 7 (66.868666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-994000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.24s)
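Both restart attempts in this test fail the same way: `Failed to connect to "/var/run/socket_vmnet": Connection refused` when `socket_vmnet_client` launches `qemu-system-aarch64`, which points at the socket_vmnet daemon on the CI host rather than at minikube itself. A hedged triage sketch; the socket path comes from the `SocketVMnetPath` field in the log, and the existence check is an assumption about how one might verify the daemon, not something the test performs:

```shell
# Check whether the socket_vmnet daemon's unix socket exists.
# Path taken from SocketVMnetPath:/var/run/socket_vmnet in the log.
check_vmnet_socket() {
  local sock="$1"
  if [ -S "$sock" ]; then
    echo "present"
  else
    echo "missing"   # consistent with the "Connection refused" above
  fi
}

check_vmnet_socket /var/run/socket_vmnet
```

If the socket is missing, restarting the socket_vmnet service on the host (rather than `minikube delete`, which the error text suggests) would be the likelier fix for this class of failure.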

TestMultiNode/serial/ValidateNameConflict (19.84s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-994000
multinode_test.go:452: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-994000-m01 --driver=qemu2 
E0906 16:48:38.553181    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/functional-526000/client.crt: no such file or directory
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-994000-m01 --driver=qemu2 : exit status 80 (9.808074708s)

-- stdout --
	* [multinode-994000-m01] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-994000-m01 in cluster multinode-994000-m01
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-994000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-994000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-994000-m02 --driver=qemu2 
E0906 16:48:45.626459    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/client.crt: no such file or directory
E0906 16:48:45.632866    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/client.crt: no such file or directory
E0906 16:48:45.645041    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/client.crt: no such file or directory
E0906 16:48:45.666866    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/client.crt: no such file or directory
E0906 16:48:45.708953    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/client.crt: no such file or directory
E0906 16:48:45.791047    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/client.crt: no such file or directory
E0906 16:48:45.953189    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/client.crt: no such file or directory
E0906 16:48:46.275326    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/client.crt: no such file or directory
E0906 16:48:46.917799    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/client.crt: no such file or directory
E0906 16:48:48.200140    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/client.crt: no such file or directory
E0906 16:48:50.761547    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/client.crt: no such file or directory
multinode_test.go:460: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-994000-m02 --driver=qemu2 : exit status 80 (9.787721458s)

-- stdout --
	* [multinode-994000-m02] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-994000-m02 in cluster multinode-994000-m02
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-994000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-994000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:462: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-994000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-994000
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-994000: exit status 89 (79.845375ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-994000"

-- /stdout --
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-994000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-994000 -n multinode-994000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-994000 -n multinode-994000: exit status 7 (29.766708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-994000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (19.84s)
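The post-mortem blocks throughout this report end with `exit status 7 (may be ok)` from `minikube status`. A sketch of decoding that code, under the assumption that minikube encodes host/kubelet/apiserver state as bit flags in the status exit code (so 7 would mean all three stopped, consistent with the `Stopped` output above); the bit layout here is an assumption, not taken from this report:

```shell
# Decode a minikube `status` exit code into its stopped components.
# Bit layout (assumed): 1 = host, 2 = kubelet, 4 = apiserver.
decode_status() {
  local code=$1 out=""
  [ $(( code & 1 )) -ne 0 ] && out="$out host-stopped"
  [ $(( code & 2 )) -ne 0 ] && out="$out kubelet-stopped"
  [ $(( code & 4 )) -ne 0 ] && out="$out apiserver-stopped"
  echo "${out# }"
}

decode_status 7    # → host-stopped kubelet-stopped apiserver-stopped
```

This is why the harness treats 7 as "may be ok": a fully stopped host is an expected state for some tests, so the helper skips log retrieval instead of failing outright.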

TestPreload (9.85s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-045000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
E0906 16:48:55.882030    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/client.crt: no such file or directory
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-045000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.682445167s)

-- stdout --
	* [test-preload-045000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node test-preload-045000 in cluster test-preload-045000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-045000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 16:48:51.169219    2658 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:48:51.169357    2658 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:48:51.169360    2658 out.go:309] Setting ErrFile to fd 2...
	I0906 16:48:51.169362    2658 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:48:51.169469    2658 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:48:51.170449    2658 out.go:303] Setting JSON to false
	I0906 16:48:51.185574    2658 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1105,"bootTime":1694043026,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0906 16:48:51.185653    2658 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 16:48:51.191077    2658 out.go:177] * [test-preload-045000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 16:48:51.198027    2658 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 16:48:51.198091    2658 notify.go:220] Checking for updates...
	I0906 16:48:51.202081    2658 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:48:51.205134    2658 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 16:48:51.208065    2658 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 16:48:51.211094    2658 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	I0906 16:48:51.214126    2658 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 16:48:51.215714    2658 config.go:182] Loaded profile config "multinode-994000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:48:51.215756    2658 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 16:48:51.220070    2658 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 16:48:51.226941    2658 start.go:298] selected driver: qemu2
	I0906 16:48:51.226946    2658 start.go:902] validating driver "qemu2" against <nil>
	I0906 16:48:51.226951    2658 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 16:48:51.228902    2658 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 16:48:51.232090    2658 out.go:177] * Automatically selected the socket_vmnet network
	I0906 16:48:51.235186    2658 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 16:48:51.235216    2658 cni.go:84] Creating CNI manager for ""
	I0906 16:48:51.235231    2658 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 16:48:51.235235    2658 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 16:48:51.235240    2658 start_flags.go:321] config:
	{Name:test-preload-045000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-045000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock:
SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:48:51.239321    2658 iso.go:125] acquiring lock: {Name:mk208580a41bd83b6238142fbb7ad0497ef8967b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:48:51.246067    2658 out.go:177] * Starting control plane node test-preload-045000 in cluster test-preload-045000
	I0906 16:48:51.250143    2658 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0906 16:48:51.250220    2658 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/test-preload-045000/config.json ...
	I0906 16:48:51.250238    2658 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/test-preload-045000/config.json: {Name:mke6a9d9805ca46fb8d8c5bd7b6417200ae294c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:48:51.250244    2658 cache.go:107] acquiring lock: {Name:mk1f6a556529b28267c0ce8bc4cb4fdcd11f223f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:48:51.250256    2658 cache.go:107] acquiring lock: {Name:mk2d23a050dfe447714b904a97ba056417a371bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:48:51.250254    2658 cache.go:107] acquiring lock: {Name:mk1bad6bf52a1102d5e87c28ff803ceeedea07c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:48:51.250487    2658 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 16:48:51.250489    2658 cache.go:107] acquiring lock: {Name:mk2660681673281a054fcdf652845ab49ef0d94f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:48:51.250504    2658 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0906 16:48:51.250528    2658 cache.go:107] acquiring lock: {Name:mk35c44aae1d63f8c3e0a5728596447ec2a95e26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:48:51.250596    2658 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0906 16:48:51.250602    2658 cache.go:107] acquiring lock: {Name:mk8dbc3a90efb54f539b3cffedb24bbec4ac4fb4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:48:51.250611    2658 cache.go:107] acquiring lock: {Name:mk86e98642703a0fe43d4f00bfe92612db0476b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:48:51.250637    2658 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0906 16:48:51.250684    2658 start.go:365] acquiring machines lock for test-preload-045000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:48:51.250603    2658 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0906 16:48:51.250688    2658 cache.go:107] acquiring lock: {Name:mk026c98bfb10a2ca656989744a7d54e7b20a572 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:48:51.250704    2658 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0906 16:48:51.250718    2658 start.go:369] acquired machines lock for "test-preload-045000" in 28.542µs
	I0906 16:48:51.250685    2658 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0906 16:48:51.250730    2658 start.go:93] Provisioning new machine with config: &{Name:test-preload-045000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-045000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:48:51.250778    2658 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:48:51.250802    2658 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0906 16:48:51.259013    2658 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 16:48:51.263964    2658 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0906 16:48:51.263973    2658 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0906 16:48:51.264698    2658 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0906 16:48:51.265571    2658 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0906 16:48:51.265683    2658 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 16:48:51.265868    2658 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0906 16:48:51.268199    2658 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0906 16:48:51.268582    2658 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0906 16:48:51.274463    2658 start.go:159] libmachine.API.Create for "test-preload-045000" (driver="qemu2")
	I0906 16:48:51.274475    2658 client.go:168] LocalClient.Create starting
	I0906 16:48:51.274541    2658 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:48:51.274566    2658 main.go:141] libmachine: Decoding PEM data...
	I0906 16:48:51.274577    2658 main.go:141] libmachine: Parsing certificate...
	I0906 16:48:51.274616    2658 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:48:51.274634    2658 main.go:141] libmachine: Decoding PEM data...
	I0906 16:48:51.274644    2658 main.go:141] libmachine: Parsing certificate...
	I0906 16:48:51.274928    2658 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:48:51.387206    2658 main.go:141] libmachine: Creating SSH key...
	I0906 16:48:51.462404    2658 main.go:141] libmachine: Creating Disk image...
	I0906 16:48:51.462413    2658 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:48:51.462536    2658 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/test-preload-045000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/test-preload-045000/disk.qcow2
	I0906 16:48:51.471317    2658 main.go:141] libmachine: STDOUT: 
	I0906 16:48:51.471339    2658 main.go:141] libmachine: STDERR: 
	I0906 16:48:51.471406    2658 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/test-preload-045000/disk.qcow2 +20000M
	I0906 16:48:51.478908    2658 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:48:51.478925    2658 main.go:141] libmachine: STDERR: 
	I0906 16:48:51.478942    2658 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/test-preload-045000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/test-preload-045000/disk.qcow2
	I0906 16:48:51.478947    2658 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:48:51.478984    2658 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/test-preload-045000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/test-preload-045000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/test-preload-045000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:6b:e1:80:dd:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/test-preload-045000/disk.qcow2
	I0906 16:48:51.480629    2658 main.go:141] libmachine: STDOUT: 
	I0906 16:48:51.480643    2658 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:48:51.480661    2658 client.go:171] LocalClient.Create took 206.185208ms
	I0906 16:48:52.136427    2658 cache.go:162] opening:  /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0906 16:48:52.184742    2658 cache.go:162] opening:  /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0906 16:48:52.339508    2658 cache.go:157] /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0906 16:48:52.339531    2658 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 1.089136542s
	I0906 16:48:52.339540    2658 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0906 16:48:52.404619    2658 cache.go:162] opening:  /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0906 16:48:52.595022    2658 cache.go:162] opening:  /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0906 16:48:53.094455    2658 cache.go:162] opening:  /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	W0906 16:48:53.214254    2658 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0906 16:48:53.214280    2658 cache.go:162] opening:  /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0906 16:48:53.302474    2658 cache.go:162] opening:  /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0906 16:48:53.454585    2658 cache.go:157] /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0906 16:48:53.454613    2658 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.204414708s
	I0906 16:48:53.454625    2658 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0906 16:48:53.480737    2658 start.go:128] duration metric: createHost completed in 2.229996875s
	I0906 16:48:53.480754    2658 start.go:83] releasing machines lock for "test-preload-045000", held for 2.230075875s
	W0906 16:48:53.480774    2658 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:48:53.490885    2658 out.go:177] * Deleting "test-preload-045000" in qemu2 ...
	W0906 16:48:53.487114    2658 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0906 16:48:53.493832    2658 cache.go:162] opening:  /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	W0906 16:48:53.504195    2658 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:48:53.504206    2658 start.go:687] Will try again in 5 seconds ...
	I0906 16:48:54.867827    2658 cache.go:157] /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0906 16:48:54.867889    2658 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.61736375s
	I0906 16:48:54.867923    2658 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0906 16:48:55.359311    2658 cache.go:157] /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0906 16:48:55.359381    2658 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 4.108962083s
	I0906 16:48:55.359419    2658 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0906 16:48:56.416357    2658 cache.go:157] /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0906 16:48:56.416406    2658 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.166261459s
	I0906 16:48:56.416430    2658 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0906 16:48:57.672049    2658 cache.go:157] /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0906 16:48:57.672100    2658 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 6.421975291s
	I0906 16:48:57.672129    2658 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0906 16:48:57.845350    2658 cache.go:157] /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0906 16:48:57.845418    2658 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.594936291s
	I0906 16:48:57.845453    2658 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0906 16:48:58.504614    2658 start.go:365] acquiring machines lock for test-preload-045000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:48:58.505144    2658 start.go:369] acquired machines lock for "test-preload-045000" in 445.125µs
	I0906 16:48:58.505274    2658 start.go:93] Provisioning new machine with config: &{Name:test-preload-045000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-045000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:48:58.505572    2658 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:48:58.515164    2658 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 16:48:58.564050    2658 start.go:159] libmachine.API.Create for "test-preload-045000" (driver="qemu2")
	I0906 16:48:58.564100    2658 client.go:168] LocalClient.Create starting
	I0906 16:48:58.564214    2658 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:48:58.564261    2658 main.go:141] libmachine: Decoding PEM data...
	I0906 16:48:58.564283    2658 main.go:141] libmachine: Parsing certificate...
	I0906 16:48:58.564347    2658 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:48:58.564381    2658 main.go:141] libmachine: Decoding PEM data...
	I0906 16:48:58.564396    2658 main.go:141] libmachine: Parsing certificate...
	I0906 16:48:58.565356    2658 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:48:58.693638    2658 main.go:141] libmachine: Creating SSH key...
	I0906 16:48:58.764507    2658 main.go:141] libmachine: Creating Disk image...
	I0906 16:48:58.764513    2658 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:48:58.764663    2658 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/test-preload-045000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/test-preload-045000/disk.qcow2
	I0906 16:48:58.773195    2658 main.go:141] libmachine: STDOUT: 
	I0906 16:48:58.773210    2658 main.go:141] libmachine: STDERR: 
	I0906 16:48:58.773271    2658 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/test-preload-045000/disk.qcow2 +20000M
	I0906 16:48:58.780503    2658 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:48:58.780516    2658 main.go:141] libmachine: STDERR: 
	I0906 16:48:58.780529    2658 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/test-preload-045000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/test-preload-045000/disk.qcow2
	I0906 16:48:58.780545    2658 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:48:58.780593    2658 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/test-preload-045000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/test-preload-045000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/test-preload-045000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:08:b5:e3:b9:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/test-preload-045000/disk.qcow2
	I0906 16:48:58.782183    2658 main.go:141] libmachine: STDOUT: 
	I0906 16:48:58.782197    2658 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:48:58.782210    2658 client.go:171] LocalClient.Create took 218.103459ms
	I0906 16:49:00.782679    2658 start.go:128] duration metric: createHost completed in 2.277065667s
	I0906 16:49:00.782748    2658 start.go:83] releasing machines lock for "test-preload-045000", held for 2.277627958s
	W0906 16:49:00.783013    2658 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-045000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-045000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:49:00.793614    2658 out.go:177] 
	W0906 16:49:00.797690    2658 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 16:49:00.797733    2658 out.go:239] * 
	* 
	W0906 16:49:00.800193    2658 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 16:49:00.810530    2658 out.go:177] 

                                                
                                                
** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-045000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:522: *** TestPreload FAILED at 2023-09-06 16:49:00.825949 -0700 PDT m=+732.098306501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-045000 -n test-preload-045000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-045000 -n test-preload-045000: exit status 7 (66.307167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-045000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-045000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-045000
--- FAIL: TestPreload (9.85s)

TestScheduledStopUnix (10.05s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-146000 --memory=2048 --driver=qemu2 
E0906 16:49:06.124439    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-146000 --memory=2048 --driver=qemu2 : exit status 80 (9.885016875s)

-- stdout --
	* [scheduled-stop-146000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-146000 in cluster scheduled-stop-146000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-146000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-146000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-146000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-146000 in cluster scheduled-stop-146000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-146000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-146000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:522: *** TestScheduledStopUnix FAILED at 2023-09-06 16:49:10.878744 -0700 PDT m=+742.151306834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-146000 -n scheduled-stop-146000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-146000 -n scheduled-stop-146000: exit status 7 (69.189917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-146000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-146000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-146000
--- FAIL: TestScheduledStopUnix (10.05s)

TestSkaffold (11.89s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe1902364229 version
skaffold_test.go:63: skaffold version: v2.7.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-697000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-697000 --memory=2600 --driver=qemu2 : exit status 80 (9.724903667s)

-- stdout --
	* [skaffold-697000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-697000 in cluster skaffold-697000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-697000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-697000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-697000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-697000 in cluster skaffold-697000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-697000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-697000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:522: *** TestSkaffold FAILED at 2023-09-06 16:49:22.773429 -0700 PDT m=+754.046235418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-697000 -n skaffold-697000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-697000 -n skaffold-697000: exit status 7 (62.224375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-697000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-697000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-697000
--- FAIL: TestSkaffold (11.89s)

TestRunningBinaryUpgrade (146.01s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
E0906 16:50:07.566311    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/client.crt: no such file or directory
E0906 16:50:54.685347    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/functional-526000/client.crt: no such file or directory
E0906 16:51:22.392249    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/functional-526000/client.crt: no such file or directory
E0906 16:51:29.487039    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/client.crt: no such file or directory
version_upgrade_test.go:106: v1.6.2 release installation failed: bad response code: 404
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-09-06 16:52:28.733716 -0700 PDT m=+940.010319376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-228000 -n running-upgrade-228000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-228000 -n running-upgrade-228000: exit status 85 (85.90575ms)

-- stdout --
	* Profile "running-upgrade-228000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p running-upgrade-228000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "running-upgrade-228000" host is not running, skipping log retrieval (state="* Profile \"running-upgrade-228000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p running-upgrade-228000\"")
helpers_test.go:175: Cleaning up "running-upgrade-228000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-228000
--- FAIL: TestRunningBinaryUpgrade (146.01s)

TestKubernetesUpgrade (15.28s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-775000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-775000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.761563083s)

-- stdout --
	* [kubernetes-upgrade-775000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubernetes-upgrade-775000 in cluster kubernetes-upgrade-775000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-775000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 16:52:29.083605    3137 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:52:29.083741    3137 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:52:29.083744    3137 out.go:309] Setting ErrFile to fd 2...
	I0906 16:52:29.083747    3137 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:52:29.083885    3137 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:52:29.084875    3137 out.go:303] Setting JSON to false
	I0906 16:52:29.100306    3137 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1323,"bootTime":1694043026,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0906 16:52:29.100370    3137 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 16:52:29.105062    3137 out.go:177] * [kubernetes-upgrade-775000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 16:52:29.112037    3137 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 16:52:29.115004    3137 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:52:29.112100    3137 notify.go:220] Checking for updates...
	I0906 16:52:29.120925    3137 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 16:52:29.124014    3137 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 16:52:29.126999    3137 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	I0906 16:52:29.130030    3137 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 16:52:29.133336    3137 config.go:182] Loaded profile config "cert-expiration-033000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:52:29.133404    3137 config.go:182] Loaded profile config "multinode-994000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:52:29.133449    3137 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 16:52:29.137973    3137 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 16:52:29.144943    3137 start.go:298] selected driver: qemu2
	I0906 16:52:29.144948    3137 start.go:902] validating driver "qemu2" against <nil>
	I0906 16:52:29.144953    3137 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 16:52:29.146939    3137 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 16:52:29.149985    3137 out.go:177] * Automatically selected the socket_vmnet network
	I0906 16:52:29.153040    3137 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0906 16:52:29.153064    3137 cni.go:84] Creating CNI manager for ""
	I0906 16:52:29.153073    3137 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0906 16:52:29.153077    3137 start_flags.go:321] config:
	{Name:kubernetes-upgrade-775000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-775000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:52:29.157410    3137 iso.go:125] acquiring lock: {Name:mk208580a41bd83b6238142fbb7ad0497ef8967b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:52:29.164941    3137 out.go:177] * Starting control plane node kubernetes-upgrade-775000 in cluster kubernetes-upgrade-775000
	I0906 16:52:29.168968    3137 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0906 16:52:29.169001    3137 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0906 16:52:29.169022    3137 cache.go:57] Caching tarball of preloaded images
	I0906 16:52:29.169103    3137 preload.go:174] Found /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 16:52:29.169109    3137 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0906 16:52:29.169183    3137 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/kubernetes-upgrade-775000/config.json ...
	I0906 16:52:29.169196    3137 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/kubernetes-upgrade-775000/config.json: {Name:mk89b63460d8fda3f4382a98050b2dc18c755945 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:52:29.169404    3137 start.go:365] acquiring machines lock for kubernetes-upgrade-775000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:52:29.169433    3137 start.go:369] acquired machines lock for "kubernetes-upgrade-775000" in 23.708µs
	I0906 16:52:29.169447    3137 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-775000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-775000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:52:29.169480    3137 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:52:29.177980    3137 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 16:52:29.194375    3137 start.go:159] libmachine.API.Create for "kubernetes-upgrade-775000" (driver="qemu2")
	I0906 16:52:29.194402    3137 client.go:168] LocalClient.Create starting
	I0906 16:52:29.194459    3137 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:52:29.194484    3137 main.go:141] libmachine: Decoding PEM data...
	I0906 16:52:29.194497    3137 main.go:141] libmachine: Parsing certificate...
	I0906 16:52:29.194541    3137 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:52:29.194560    3137 main.go:141] libmachine: Decoding PEM data...
	I0906 16:52:29.194571    3137 main.go:141] libmachine: Parsing certificate...
	I0906 16:52:29.194910    3137 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:52:29.312207    3137 main.go:141] libmachine: Creating SSH key...
	I0906 16:52:29.415262    3137 main.go:141] libmachine: Creating Disk image...
	I0906 16:52:29.415267    3137 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:52:29.415401    3137 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubernetes-upgrade-775000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubernetes-upgrade-775000/disk.qcow2
	I0906 16:52:29.424159    3137 main.go:141] libmachine: STDOUT: 
	I0906 16:52:29.424175    3137 main.go:141] libmachine: STDERR: 
	I0906 16:52:29.424232    3137 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubernetes-upgrade-775000/disk.qcow2 +20000M
	I0906 16:52:29.431379    3137 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:52:29.431391    3137 main.go:141] libmachine: STDERR: 
	I0906 16:52:29.431403    3137 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubernetes-upgrade-775000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubernetes-upgrade-775000/disk.qcow2
	I0906 16:52:29.431408    3137 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:52:29.431450    3137 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubernetes-upgrade-775000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubernetes-upgrade-775000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubernetes-upgrade-775000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:18:ac:9d:29:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubernetes-upgrade-775000/disk.qcow2
	I0906 16:52:29.433001    3137 main.go:141] libmachine: STDOUT: 
	I0906 16:52:29.433013    3137 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:52:29.433029    3137 client.go:171] LocalClient.Create took 238.624958ms
	I0906 16:52:31.435368    3137 start.go:128] duration metric: createHost completed in 2.265919833s
	I0906 16:52:31.435410    3137 start.go:83] releasing machines lock for "kubernetes-upgrade-775000", held for 2.26601025s
	W0906 16:52:31.435453    3137 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:52:31.442882    3137 out.go:177] * Deleting "kubernetes-upgrade-775000" in qemu2 ...
	W0906 16:52:31.465551    3137 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:52:31.465583    3137 start.go:687] Will try again in 5 seconds ...
	I0906 16:52:36.467724    3137 start.go:365] acquiring machines lock for kubernetes-upgrade-775000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:52:36.468195    3137 start.go:369] acquired machines lock for "kubernetes-upgrade-775000" in 361.875µs
	I0906 16:52:36.468306    3137 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-775000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-775000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:52:36.468601    3137 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:52:36.478263    3137 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 16:52:36.524198    3137 start.go:159] libmachine.API.Create for "kubernetes-upgrade-775000" (driver="qemu2")
	I0906 16:52:36.524247    3137 client.go:168] LocalClient.Create starting
	I0906 16:52:36.524378    3137 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:52:36.524442    3137 main.go:141] libmachine: Decoding PEM data...
	I0906 16:52:36.524474    3137 main.go:141] libmachine: Parsing certificate...
	I0906 16:52:36.524570    3137 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:52:36.524612    3137 main.go:141] libmachine: Decoding PEM data...
	I0906 16:52:36.524631    3137 main.go:141] libmachine: Parsing certificate...
	I0906 16:52:36.525219    3137 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:52:36.652991    3137 main.go:141] libmachine: Creating SSH key...
	I0906 16:52:36.758535    3137 main.go:141] libmachine: Creating Disk image...
	I0906 16:52:36.758540    3137 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:52:36.758700    3137 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubernetes-upgrade-775000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubernetes-upgrade-775000/disk.qcow2
	I0906 16:52:36.767495    3137 main.go:141] libmachine: STDOUT: 
	I0906 16:52:36.767511    3137 main.go:141] libmachine: STDERR: 
	I0906 16:52:36.767597    3137 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubernetes-upgrade-775000/disk.qcow2 +20000M
	I0906 16:52:36.774773    3137 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:52:36.774785    3137 main.go:141] libmachine: STDERR: 
	I0906 16:52:36.774799    3137 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubernetes-upgrade-775000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubernetes-upgrade-775000/disk.qcow2
	I0906 16:52:36.774803    3137 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:52:36.774850    3137 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubernetes-upgrade-775000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubernetes-upgrade-775000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubernetes-upgrade-775000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:e0:24:cf:d5:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubernetes-upgrade-775000/disk.qcow2
	I0906 16:52:36.776420    3137 main.go:141] libmachine: STDOUT: 
	I0906 16:52:36.776433    3137 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:52:36.776447    3137 client.go:171] LocalClient.Create took 252.200208ms
	I0906 16:52:38.778619    3137 start.go:128] duration metric: createHost completed in 2.310027708s
	I0906 16:52:38.778684    3137 start.go:83] releasing machines lock for "kubernetes-upgrade-775000", held for 2.310513708s
	W0906 16:52:38.779212    3137 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-775000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-775000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:52:38.787853    3137 out.go:177] 
	W0906 16:52:38.792887    3137 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 16:52:38.792937    3137 out.go:239] * 
	* 
	W0906 16:52:38.795571    3137 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 16:52:38.803784    3137 out.go:177] 

** /stderr **
version_upgrade_test.go:236: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-775000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-775000
version_upgrade_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-775000 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-775000 status --format={{.Host}}: exit status 7 (36.363042ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-775000 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-775000 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.177981041s)

-- stdout --
	* [kubernetes-upgrade-775000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node kubernetes-upgrade-775000 in cluster kubernetes-upgrade-775000
	* Restarting existing qemu2 VM for "kubernetes-upgrade-775000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-775000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 16:52:38.984187    3155 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:52:38.984313    3155 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:52:38.984316    3155 out.go:309] Setting ErrFile to fd 2...
	I0906 16:52:38.984318    3155 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:52:38.984428    3155 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:52:38.985380    3155 out.go:303] Setting JSON to false
	I0906 16:52:39.000424    3155 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1332,"bootTime":1694043026,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0906 16:52:39.000484    3155 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 16:52:39.004959    3155 out.go:177] * [kubernetes-upgrade-775000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 16:52:39.011820    3155 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 16:52:39.015857    3155 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:52:39.011882    3155 notify.go:220] Checking for updates...
	I0906 16:52:39.019896    3155 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 16:52:39.022844    3155 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 16:52:39.026816    3155 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	I0906 16:52:39.029792    3155 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 16:52:39.033176    3155 config.go:182] Loaded profile config "kubernetes-upgrade-775000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0906 16:52:39.033437    3155 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 16:52:39.036858    3155 out.go:177] * Using the qemu2 driver based on existing profile
	I0906 16:52:39.043816    3155 start.go:298] selected driver: qemu2
	I0906 16:52:39.043821    3155 start.go:902] validating driver "qemu2" against &{Name:kubernetes-upgrade-775000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-775000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:52:39.043897    3155 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 16:52:39.045900    3155 cni.go:84] Creating CNI manager for ""
	I0906 16:52:39.045913    3155 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 16:52:39.045917    3155 start_flags.go:321] config:
	{Name:kubernetes-upgrade-775000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kubernetes-upgrade-775000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:52:39.049877    3155 iso.go:125] acquiring lock: {Name:mk208580a41bd83b6238142fbb7ad0497ef8967b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:52:39.057892    3155 out.go:177] * Starting control plane node kubernetes-upgrade-775000 in cluster kubernetes-upgrade-775000
	I0906 16:52:39.060822    3155 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 16:52:39.060837    3155 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 16:52:39.060850    3155 cache.go:57] Caching tarball of preloaded images
	I0906 16:52:39.060900    3155 preload.go:174] Found /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 16:52:39.060905    3155 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 16:52:39.060955    3155 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/kubernetes-upgrade-775000/config.json ...
	I0906 16:52:39.061190    3155 start.go:365] acquiring machines lock for kubernetes-upgrade-775000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:52:39.061218    3155 start.go:369] acquired machines lock for "kubernetes-upgrade-775000" in 21.958µs
	I0906 16:52:39.061228    3155 start.go:96] Skipping create...Using existing machine configuration
	I0906 16:52:39.061231    3155 fix.go:54] fixHost starting: 
	I0906 16:52:39.061353    3155 fix.go:102] recreateIfNeeded on kubernetes-upgrade-775000: state=Stopped err=<nil>
	W0906 16:52:39.061361    3155 fix.go:128] unexpected machine state, will restart: <nil>
	I0906 16:52:39.069840    3155 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-775000" ...
	I0906 16:52:39.073805    3155 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubernetes-upgrade-775000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubernetes-upgrade-775000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubernetes-upgrade-775000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:e0:24:cf:d5:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubernetes-upgrade-775000/disk.qcow2
	I0906 16:52:39.075751    3155 main.go:141] libmachine: STDOUT: 
	I0906 16:52:39.075770    3155 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:52:39.075797    3155 fix.go:56] fixHost completed within 14.564125ms
	I0906 16:52:39.075802    3155 start.go:83] releasing machines lock for "kubernetes-upgrade-775000", held for 14.579792ms
	W0906 16:52:39.075811    3155 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 16:52:39.075854    3155 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:52:39.075859    3155 start.go:687] Will try again in 5 seconds ...
	I0906 16:52:44.077876    3155 start.go:365] acquiring machines lock for kubernetes-upgrade-775000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:52:44.078176    3155 start.go:369] acquired machines lock for "kubernetes-upgrade-775000" in 244.666µs
	I0906 16:52:44.078294    3155 start.go:96] Skipping create...Using existing machine configuration
	I0906 16:52:44.078313    3155 fix.go:54] fixHost starting: 
	I0906 16:52:44.079005    3155 fix.go:102] recreateIfNeeded on kubernetes-upgrade-775000: state=Stopped err=<nil>
	W0906 16:52:44.079034    3155 fix.go:128] unexpected machine state, will restart: <nil>
	I0906 16:52:44.084391    3155 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-775000" ...
	I0906 16:52:44.090504    3155 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubernetes-upgrade-775000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubernetes-upgrade-775000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubernetes-upgrade-775000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:e0:24:cf:d5:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubernetes-upgrade-775000/disk.qcow2
	I0906 16:52:44.098880    3155 main.go:141] libmachine: STDOUT: 
	I0906 16:52:44.098941    3155 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:52:44.099015    3155 fix.go:56] fixHost completed within 20.696625ms
	I0906 16:52:44.099037    3155 start.go:83] releasing machines lock for "kubernetes-upgrade-775000", held for 20.841792ms
	W0906 16:52:44.099198    3155 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-775000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-775000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:52:44.105386    3155 out.go:177] 
	W0906 16:52:44.109428    3155 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 16:52:44.109451    3155 out.go:239] * 
	* 
	W0906 16:52:44.112175    3155 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 16:52:44.123407    3155 out.go:177] 

** /stderr **
version_upgrade_test.go:257: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-775000 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-775000 version --output=json
version_upgrade_test.go:260: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-775000 version --output=json: exit status 1 (64.577583ms)

** stderr ** 
	error: context "kubernetes-upgrade-775000" does not exist

** /stderr **
version_upgrade_test.go:262: error running kubectl: exit status 1
panic.go:522: *** TestKubernetesUpgrade FAILED at 2023-09-06 16:52:44.201617 -0700 PDT m=+955.478536043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-775000 -n kubernetes-upgrade-775000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-775000 -n kubernetes-upgrade-775000: exit status 7 (33.30475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-775000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-775000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-775000
--- FAIL: TestKubernetesUpgrade (15.28s)
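Every create and restart attempt above fails identically with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet daemon was not listening on the agent. A minimal pre-flight check one could run on the host before blaming minikube (the socket path is taken from the log; the `launchctl` hint is an assumption based on a typical socket_vmnet install):

```shell
#!/bin/sh
# Check whether socket_vmnet's unix socket exists before launching QEMU
# through socket_vmnet_client. A missing socket means the daemon is down.
SOCKET=/var/run/socket_vmnet
if [ -S "$SOCKET" ]; then
    echo "OK: $SOCKET exists"
else
    echo "MISSING: $SOCKET (socket_vmnet daemon not running?)"
    echo "Hint: sudo launchctl list | grep socket_vmnet"
fi
```

This only detects the symptom; restarting the daemon (or the CI agent's launchd service for it) is the likely fix for the whole batch of qemu2 `Connection refused` failures in this report.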

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.45s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
E0906 16:49:26.604735    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/client.crt: no such file or directory
* minikube v1.31.2 on darwin (arm64)
- MINIKUBE_LOCATION=17174
- KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2796912179/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.45s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.14s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.31.2 on darwin (arm64)
- MINIKUBE_LOCATION=17174
- KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1780516917/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.14s)
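Both hyperkit upgrade-skip failures above are environmental rather than regressions: the hyperkit driver only ships for Intel Macs, so minikube refuses it on darwin/arm64 with `DRV_UNSUPPORTED_OS` (exit status 56). A minimal sketch of the same platform gate for triaging such runs locally (the messages here are illustrative, not minikube's own):

```shell
# Reproduce the platform check behind DRV_UNSUPPORTED_OS: hyperkit is
# x86_64-only, so a darwin/arm64 host must use another driver (e.g. qemu2).
platform="$(uname -s)/$(uname -m)"
if [ "$platform" = "Darwin/arm64" ]; then
  echo "hyperkit unsupported on $platform (expect exit status 56)"
else
  echo "hyperkit check not triggered on $platform"
fi
```

On this report's build host (`Darwin 13.5.1 (arm64)`), the first branch fires, which matches the 1.1-1.5s fail-fast durations recorded for these two tests.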

TestStoppedBinaryUpgrade/Setup (142.44s)

=== RUN   TestStoppedBinaryUpgrade/Setup
version_upgrade_test.go:167: v1.6.2 release installation failed: bad response code: 404
--- FAIL: TestStoppedBinaryUpgrade/Setup (142.44s)

TestPause/serial/Start (9.77s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-624000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-624000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.694946375s)

-- stdout --
	* [pause-624000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node pause-624000 in cluster pause-624000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-624000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-624000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-624000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-624000 -n pause-624000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-624000 -n pause-624000: exit status 7 (69.530667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-624000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.77s)
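This failure, and nearly every qemu2-driver failure that follows, shares one root cause: each VM create/restart dies with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet daemon on the build host is down or its socket is stale. A quick triage sketch, assuming the default socket path shown in the log (adjust if the host overrides `SocketVMnetPath`):

```shell
# Check whether the socket_vmnet endpoint the qemu2 driver dials exists as
# a unix socket; "Connection refused" usually means it is absent, or no
# daemon is listening behind it.
SOCKET="/var/run/socket_vmnet"
if [ -S "$SOCKET" ]; then
  echo "socket_vmnet socket present: $SOCKET (daemon may still be dead)"
else
  echo "socket_vmnet socket missing: $SOCKET"
fi
```

Since the daemon is host-level infrastructure, the `minikube delete -p <profile>` hint in the stderr output cannot fix these failures; the socket_vmnet service itself must be restarted on the agent.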

TestNoKubernetes/serial/StartWithK8s (9.73s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-447000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-447000 --driver=qemu2 : exit status 80 (9.658451166s)

-- stdout --
	* [NoKubernetes-447000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node NoKubernetes-447000 in cluster NoKubernetes-447000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-447000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-447000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-447000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-447000 -n NoKubernetes-447000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-447000 -n NoKubernetes-447000: exit status 7 (69.813959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-447000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.73s)

TestNoKubernetes/serial/StartWithStopK8s (5.32s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-447000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-447000 --no-kubernetes --driver=qemu2 : exit status 80 (5.247183917s)

-- stdout --
	* [NoKubernetes-447000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-447000
	* Restarting existing qemu2 VM for "NoKubernetes-447000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-447000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-447000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-447000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-447000 -n NoKubernetes-447000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-447000 -n NoKubernetes-447000: exit status 7 (69.073792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-447000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.32s)

TestNoKubernetes/serial/Start (5.32s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-447000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-447000 --no-kubernetes --driver=qemu2 : exit status 80 (5.247994584s)

-- stdout --
	* [NoKubernetes-447000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-447000
	* Restarting existing qemu2 VM for "NoKubernetes-447000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-447000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-447000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-447000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-447000 -n NoKubernetes-447000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-447000 -n NoKubernetes-447000: exit status 7 (67.757916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-447000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.32s)

TestNoKubernetes/serial/StartNoArgs (5.31s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-447000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-447000 --driver=qemu2 : exit status 80 (5.234533958s)

-- stdout --
	* [NoKubernetes-447000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-447000
	* Restarting existing qemu2 VM for "NoKubernetes-447000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-447000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-447000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-447000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-447000 -n NoKubernetes-447000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-447000 -n NoKubernetes-447000: exit status 7 (73.143291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-447000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.31s)

TestNetworkPlugins/group/kindnet/Start (9.79s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-967000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
E0906 16:53:45.539170    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/ingress-addon-legacy-208000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-967000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.784063333s)

-- stdout --
	* [kindnet-967000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kindnet-967000 in cluster kindnet-967000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-967000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 16:53:39.112667    3277 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:53:39.112781    3277 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:53:39.112783    3277 out.go:309] Setting ErrFile to fd 2...
	I0906 16:53:39.112786    3277 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:53:39.112885    3277 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:53:39.113858    3277 out.go:303] Setting JSON to false
	I0906 16:53:39.129023    3277 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1393,"bootTime":1694043026,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0906 16:53:39.129088    3277 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 16:53:39.133724    3277 out.go:177] * [kindnet-967000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 16:53:39.141793    3277 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 16:53:39.144700    3277 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:53:39.141854    3277 notify.go:220] Checking for updates...
	I0906 16:53:39.150736    3277 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 16:53:39.153647    3277 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 16:53:39.156776    3277 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	I0906 16:53:39.159771    3277 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 16:53:39.163024    3277 config.go:182] Loaded profile config "multinode-994000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:53:39.163071    3277 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 16:53:39.167693    3277 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 16:53:39.173603    3277 start.go:298] selected driver: qemu2
	I0906 16:53:39.173610    3277 start.go:902] validating driver "qemu2" against <nil>
	I0906 16:53:39.173619    3277 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 16:53:39.175551    3277 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 16:53:39.178730    3277 out.go:177] * Automatically selected the socket_vmnet network
	I0906 16:53:39.181815    3277 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 16:53:39.181835    3277 cni.go:84] Creating CNI manager for "kindnet"
	I0906 16:53:39.181840    3277 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0906 16:53:39.181845    3277 start_flags.go:321] config:
	{Name:kindnet-967000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kindnet-967000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: S
SHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:53:39.186036    3277 iso.go:125] acquiring lock: {Name:mk208580a41bd83b6238142fbb7ad0497ef8967b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:53:39.194758    3277 out.go:177] * Starting control plane node kindnet-967000 in cluster kindnet-967000
	I0906 16:53:39.198732    3277 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 16:53:39.198751    3277 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 16:53:39.198766    3277 cache.go:57] Caching tarball of preloaded images
	I0906 16:53:39.198819    3277 preload.go:174] Found /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 16:53:39.198824    3277 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 16:53:39.198885    3277 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/kindnet-967000/config.json ...
	I0906 16:53:39.198897    3277 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/kindnet-967000/config.json: {Name:mk998dca3f5043857bc29c42e022f85d71a04c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:53:39.199114    3277 start.go:365] acquiring machines lock for kindnet-967000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:53:39.199143    3277 start.go:369] acquired machines lock for "kindnet-967000" in 24.042µs
	I0906 16:53:39.199158    3277 start.go:93] Provisioning new machine with config: &{Name:kindnet-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:kindnet-967000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:53:39.199191    3277 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:53:39.207748    3277 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 16:53:39.224915    3277 start.go:159] libmachine.API.Create for "kindnet-967000" (driver="qemu2")
	I0906 16:53:39.224937    3277 client.go:168] LocalClient.Create starting
	I0906 16:53:39.225012    3277 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:53:39.225042    3277 main.go:141] libmachine: Decoding PEM data...
	I0906 16:53:39.225055    3277 main.go:141] libmachine: Parsing certificate...
	I0906 16:53:39.225088    3277 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:53:39.225110    3277 main.go:141] libmachine: Decoding PEM data...
	I0906 16:53:39.225124    3277 main.go:141] libmachine: Parsing certificate...
	I0906 16:53:39.225464    3277 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:53:39.345627    3277 main.go:141] libmachine: Creating SSH key...
	I0906 16:53:39.443977    3277 main.go:141] libmachine: Creating Disk image...
	I0906 16:53:39.443983    3277 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:53:39.444106    3277 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kindnet-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kindnet-967000/disk.qcow2
	I0906 16:53:39.452880    3277 main.go:141] libmachine: STDOUT: 
	I0906 16:53:39.452899    3277 main.go:141] libmachine: STDERR: 
	I0906 16:53:39.452949    3277 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kindnet-967000/disk.qcow2 +20000M
	I0906 16:53:39.460350    3277 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:53:39.460361    3277 main.go:141] libmachine: STDERR: 
	I0906 16:53:39.460389    3277 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kindnet-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kindnet-967000/disk.qcow2
	I0906 16:53:39.460396    3277 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:53:39.460434    3277 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kindnet-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/kindnet-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kindnet-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:96:ed:2d:87:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kindnet-967000/disk.qcow2
	I0906 16:53:39.461906    3277 main.go:141] libmachine: STDOUT: 
	I0906 16:53:39.461918    3277 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:53:39.461940    3277 client.go:171] LocalClient.Create took 237.006791ms
	I0906 16:53:41.464025    3277 start.go:128] duration metric: createHost completed in 2.264910833s
	I0906 16:53:41.464114    3277 start.go:83] releasing machines lock for "kindnet-967000", held for 2.265029167s
	W0906 16:53:41.464177    3277 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:53:41.472495    3277 out.go:177] * Deleting "kindnet-967000" in qemu2 ...
	W0906 16:53:41.494496    3277 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:53:41.494529    3277 start.go:687] Will try again in 5 seconds ...
	I0906 16:53:46.496569    3277 start.go:365] acquiring machines lock for kindnet-967000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:53:46.496999    3277 start.go:369] acquired machines lock for "kindnet-967000" in 351µs
	I0906 16:53:46.497136    3277 start.go:93] Provisioning new machine with config: &{Name:kindnet-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:kindnet-967000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:53:46.497499    3277 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:53:46.507147    3277 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 16:53:46.555789    3277 start.go:159] libmachine.API.Create for "kindnet-967000" (driver="qemu2")
	I0906 16:53:46.555826    3277 client.go:168] LocalClient.Create starting
	I0906 16:53:46.555936    3277 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:53:46.555990    3277 main.go:141] libmachine: Decoding PEM data...
	I0906 16:53:46.556009    3277 main.go:141] libmachine: Parsing certificate...
	I0906 16:53:46.556096    3277 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:53:46.556138    3277 main.go:141] libmachine: Decoding PEM data...
	I0906 16:53:46.556155    3277 main.go:141] libmachine: Parsing certificate...
	I0906 16:53:46.556701    3277 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:53:46.683549    3277 main.go:141] libmachine: Creating SSH key...
	I0906 16:53:46.811741    3277 main.go:141] libmachine: Creating Disk image...
	I0906 16:53:46.811746    3277 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:53:46.811886    3277 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kindnet-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kindnet-967000/disk.qcow2
	I0906 16:53:46.820491    3277 main.go:141] libmachine: STDOUT: 
	I0906 16:53:46.820518    3277 main.go:141] libmachine: STDERR: 
	I0906 16:53:46.820583    3277 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kindnet-967000/disk.qcow2 +20000M
	I0906 16:53:46.827670    3277 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:53:46.827682    3277 main.go:141] libmachine: STDERR: 
	I0906 16:53:46.827699    3277 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kindnet-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kindnet-967000/disk.qcow2
	I0906 16:53:46.827704    3277 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:53:46.827748    3277 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kindnet-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/kindnet-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kindnet-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:e4:f4:2a:fc:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kindnet-967000/disk.qcow2
	I0906 16:53:46.829251    3277 main.go:141] libmachine: STDOUT: 
	I0906 16:53:46.829265    3277 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:53:46.829282    3277 client.go:171] LocalClient.Create took 273.462084ms
	I0906 16:53:48.831291    3277 start.go:128] duration metric: createHost completed in 2.333868709s
	I0906 16:53:48.831329    3277 start.go:83] releasing machines lock for "kindnet-967000", held for 2.334404125s
	W0906 16:53:48.831595    3277 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:53:48.839956    3277 out.go:177] 
	W0906 16:53:48.844037    3277 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 16:53:48.844059    3277 out.go:239] * 
	* 
	W0906 16:53:48.846435    3277 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 16:53:48.855054    3277 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.79s)
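Every Start failure in this group reduces to the same root cause visible in the stderr above: `socket_vmnet_client` cannot reach the socket_vmnet daemon at `/var/run/socket_vmnet` (Connection refused), so host creation aborts with exit status 80. A minimal preflight sketch for the CI agent, before rerunning the suite — the socket path is taken from this log, and the `brew services` command is an assumption based on a typical Homebrew install of socket_vmnet:

```shell
# Preflight check for the qemu2 + socket_vmnet failures above.
# SOCKET defaults to the path seen in this log; override it if
# socket_vmnet was installed with a different prefix.
SOCKET="${SOCKET:-/var/run/socket_vmnet}"
if [ -S "$SOCKET" ]; then
  MSG="socket_vmnet socket present at $SOCKET"
else
  # A missing socket (or "Connection refused") usually means the daemon
  # is not running. On a Homebrew install it is typically started with:
  #   sudo brew services start socket_vmnet
  MSG="no socket at $SOCKET -- socket_vmnet daemon is likely not running"
fi
echo "$MSG"
```

With the daemon listening, the `fd=3` that `socket_vmnet_client` hands to `qemu-system-aarch64` via `-netdev socket,id=net0,fd=3` becomes a live connection, and the Start tests should get past `createHost`.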

                                                
                                    
TestNetworkPlugins/group/auto/Start (9.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-967000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-967000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.729677084s)

                                                
                                                
-- stdout --
	* [auto-967000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node auto-967000 in cluster auto-967000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-967000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 16:53:51.097241    3391 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:53:51.097369    3391 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:53:51.097372    3391 out.go:309] Setting ErrFile to fd 2...
	I0906 16:53:51.097374    3391 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:53:51.097482    3391 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:53:51.098519    3391 out.go:303] Setting JSON to false
	I0906 16:53:51.113570    3391 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1405,"bootTime":1694043026,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0906 16:53:51.113643    3391 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 16:53:51.117477    3391 out.go:177] * [auto-967000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 16:53:51.125414    3391 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 16:53:51.129459    3391 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:53:51.125475    3391 notify.go:220] Checking for updates...
	I0906 16:53:51.133409    3391 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 16:53:51.136442    3391 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 16:53:51.139386    3391 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	I0906 16:53:51.142464    3391 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 16:53:51.145698    3391 config.go:182] Loaded profile config "multinode-994000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:53:51.145742    3391 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 16:53:51.149431    3391 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 16:53:51.156292    3391 start.go:298] selected driver: qemu2
	I0906 16:53:51.156298    3391 start.go:902] validating driver "qemu2" against <nil>
	I0906 16:53:51.156304    3391 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 16:53:51.158200    3391 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 16:53:51.162401    3391 out.go:177] * Automatically selected the socket_vmnet network
	I0906 16:53:51.165526    3391 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 16:53:51.165543    3391 cni.go:84] Creating CNI manager for ""
	I0906 16:53:51.165550    3391 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 16:53:51.165554    3391 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 16:53:51.165558    3391 start_flags.go:321] config:
	{Name:auto-967000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:auto-967000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISoc
ket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0
AutoPauseInterval:1m0s}
	I0906 16:53:51.169473    3391 iso.go:125] acquiring lock: {Name:mk208580a41bd83b6238142fbb7ad0497ef8967b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:53:51.177397    3391 out.go:177] * Starting control plane node auto-967000 in cluster auto-967000
	I0906 16:53:51.180342    3391 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 16:53:51.180362    3391 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 16:53:51.180374    3391 cache.go:57] Caching tarball of preloaded images
	I0906 16:53:51.180430    3391 preload.go:174] Found /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 16:53:51.180436    3391 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 16:53:51.180495    3391 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/auto-967000/config.json ...
	I0906 16:53:51.180507    3391 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/auto-967000/config.json: {Name:mk0b760126216438a9442cd0615185207b003cca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:53:51.180708    3391 start.go:365] acquiring machines lock for auto-967000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:53:51.180738    3391 start.go:369] acquired machines lock for "auto-967000" in 24.333µs
	I0906 16:53:51.180752    3391 start.go:93] Provisioning new machine with config: &{Name:auto-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.1 ClusterName:auto-967000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:53:51.180791    3391 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:53:51.189354    3391 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 16:53:51.205844    3391 start.go:159] libmachine.API.Create for "auto-967000" (driver="qemu2")
	I0906 16:53:51.205873    3391 client.go:168] LocalClient.Create starting
	I0906 16:53:51.205932    3391 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:53:51.205966    3391 main.go:141] libmachine: Decoding PEM data...
	I0906 16:53:51.205981    3391 main.go:141] libmachine: Parsing certificate...
	I0906 16:53:51.206022    3391 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:53:51.206041    3391 main.go:141] libmachine: Decoding PEM data...
	I0906 16:53:51.206051    3391 main.go:141] libmachine: Parsing certificate...
	I0906 16:53:51.206387    3391 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:53:51.320334    3391 main.go:141] libmachine: Creating SSH key...
	I0906 16:53:51.446812    3391 main.go:141] libmachine: Creating Disk image...
	I0906 16:53:51.446818    3391 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:53:51.446947    3391 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/auto-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/auto-967000/disk.qcow2
	I0906 16:53:51.455465    3391 main.go:141] libmachine: STDOUT: 
	I0906 16:53:51.455479    3391 main.go:141] libmachine: STDERR: 
	I0906 16:53:51.455543    3391 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/auto-967000/disk.qcow2 +20000M
	I0906 16:53:51.462892    3391 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:53:51.462906    3391 main.go:141] libmachine: STDERR: 
	I0906 16:53:51.462927    3391 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/auto-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/auto-967000/disk.qcow2
	I0906 16:53:51.462944    3391 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:53:51.462974    3391 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/auto-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/auto-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/auto-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:77:76:c6:dd:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/auto-967000/disk.qcow2
	I0906 16:53:51.464516    3391 main.go:141] libmachine: STDOUT: 
	I0906 16:53:51.464527    3391 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:53:51.464544    3391 client.go:171] LocalClient.Create took 258.676625ms
	I0906 16:53:53.466640    3391 start.go:128] duration metric: createHost completed in 2.285926958s
	I0906 16:53:53.466690    3391 start.go:83] releasing machines lock for "auto-967000", held for 2.286038s
	W0906 16:53:53.466742    3391 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:53:53.474069    3391 out.go:177] * Deleting "auto-967000" in qemu2 ...
	W0906 16:53:53.495924    3391 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:53:53.495959    3391 start.go:687] Will try again in 5 seconds ...
	I0906 16:53:58.497935    3391 start.go:365] acquiring machines lock for auto-967000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:53:58.498335    3391 start.go:369] acquired machines lock for "auto-967000" in 306.417µs
	I0906 16:53:58.498432    3391 start.go:93] Provisioning new machine with config: &{Name:auto-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.1 ClusterName:auto-967000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:53:58.498767    3391 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:53:58.504414    3391 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 16:53:58.550508    3391 start.go:159] libmachine.API.Create for "auto-967000" (driver="qemu2")
	I0906 16:53:58.550553    3391 client.go:168] LocalClient.Create starting
	I0906 16:53:58.550684    3391 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:53:58.550743    3391 main.go:141] libmachine: Decoding PEM data...
	I0906 16:53:58.550759    3391 main.go:141] libmachine: Parsing certificate...
	I0906 16:53:58.550836    3391 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:53:58.550903    3391 main.go:141] libmachine: Decoding PEM data...
	I0906 16:53:58.550918    3391 main.go:141] libmachine: Parsing certificate...
	I0906 16:53:58.551446    3391 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:53:58.677299    3391 main.go:141] libmachine: Creating SSH key...
	I0906 16:53:58.739487    3391 main.go:141] libmachine: Creating Disk image...
	I0906 16:53:58.739492    3391 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:53:58.739631    3391 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/auto-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/auto-967000/disk.qcow2
	I0906 16:53:58.748033    3391 main.go:141] libmachine: STDOUT: 
	I0906 16:53:58.748047    3391 main.go:141] libmachine: STDERR: 
	I0906 16:53:58.748113    3391 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/auto-967000/disk.qcow2 +20000M
	I0906 16:53:58.755384    3391 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:53:58.755395    3391 main.go:141] libmachine: STDERR: 
	I0906 16:53:58.755405    3391 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/auto-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/auto-967000/disk.qcow2
	I0906 16:53:58.755411    3391 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:53:58.755440    3391 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/auto-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/auto-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/auto-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:c4:68:9e:32:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/auto-967000/disk.qcow2
	I0906 16:53:58.756951    3391 main.go:141] libmachine: STDOUT: 
	I0906 16:53:58.756965    3391 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:53:58.756979    3391 client.go:171] LocalClient.Create took 206.430125ms
	I0906 16:54:00.759100    3391 start.go:128] duration metric: createHost completed in 2.2603885s
	I0906 16:54:00.759196    3391 start.go:83] releasing machines lock for "auto-967000", held for 2.260927334s
	W0906 16:54:00.759686    3391 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:54:00.768361    3391 out.go:177] 
	W0906 16:54:00.773317    3391 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 16:54:00.773350    3391 out.go:239] * 
	* 
	W0906 16:54:00.775932    3391 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 16:54:00.785340    3391 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.73s)

TestNetworkPlugins/group/flannel/Start (9.76s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-967000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-967000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.75604425s)

-- stdout --
	* [flannel-967000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node flannel-967000 in cluster flannel-967000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-967000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 16:54:02.902606    3501 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:54:02.902716    3501 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:54:02.902719    3501 out.go:309] Setting ErrFile to fd 2...
	I0906 16:54:02.902722    3501 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:54:02.902845    3501 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:54:02.903893    3501 out.go:303] Setting JSON to false
	I0906 16:54:02.918965    3501 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1416,"bootTime":1694043026,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0906 16:54:02.919037    3501 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 16:54:02.923343    3501 out.go:177] * [flannel-967000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 16:54:02.934344    3501 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 16:54:02.934402    3501 notify.go:220] Checking for updates...
	I0906 16:54:02.940325    3501 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:54:02.943267    3501 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 16:54:02.946359    3501 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 16:54:02.949360    3501 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	I0906 16:54:02.950681    3501 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 16:54:02.953584    3501 config.go:182] Loaded profile config "multinode-994000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:54:02.953627    3501 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 16:54:02.957297    3501 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 16:54:02.962289    3501 start.go:298] selected driver: qemu2
	I0906 16:54:02.962296    3501 start.go:902] validating driver "qemu2" against <nil>
	I0906 16:54:02.962303    3501 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 16:54:02.964355    3501 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 16:54:02.967279    3501 out.go:177] * Automatically selected the socket_vmnet network
	I0906 16:54:02.970518    3501 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 16:54:02.970545    3501 cni.go:84] Creating CNI manager for "flannel"
	I0906 16:54:02.970550    3501 start_flags.go:316] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0906 16:54:02.970555    3501 start_flags.go:321] config:
	{Name:flannel-967000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:flannel-967000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: S
SHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:54:02.974624    3501 iso.go:125] acquiring lock: {Name:mk208580a41bd83b6238142fbb7ad0497ef8967b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:54:02.981309    3501 out.go:177] * Starting control plane node flannel-967000 in cluster flannel-967000
	I0906 16:54:02.985235    3501 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 16:54:02.985256    3501 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 16:54:02.985267    3501 cache.go:57] Caching tarball of preloaded images
	I0906 16:54:02.985321    3501 preload.go:174] Found /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 16:54:02.985326    3501 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 16:54:02.985383    3501 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/flannel-967000/config.json ...
	I0906 16:54:02.985395    3501 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/flannel-967000/config.json: {Name:mk625b0fb3718f44ae847fff94e9f161d10b72aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:54:02.985588    3501 start.go:365] acquiring machines lock for flannel-967000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:54:02.985617    3501 start.go:369] acquired machines lock for "flannel-967000" in 23.583µs
	I0906 16:54:02.985630    3501 start.go:93] Provisioning new machine with config: &{Name:flannel-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:flannel-967000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:54:02.985655    3501 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:54:02.993247    3501 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 16:54:03.008151    3501 start.go:159] libmachine.API.Create for "flannel-967000" (driver="qemu2")
	I0906 16:54:03.008173    3501 client.go:168] LocalClient.Create starting
	I0906 16:54:03.008227    3501 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:54:03.008251    3501 main.go:141] libmachine: Decoding PEM data...
	I0906 16:54:03.008264    3501 main.go:141] libmachine: Parsing certificate...
	I0906 16:54:03.008299    3501 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:54:03.008316    3501 main.go:141] libmachine: Decoding PEM data...
	I0906 16:54:03.008322    3501 main.go:141] libmachine: Parsing certificate...
	I0906 16:54:03.008608    3501 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:54:03.120358    3501 main.go:141] libmachine: Creating SSH key...
	I0906 16:54:03.260019    3501 main.go:141] libmachine: Creating Disk image...
	I0906 16:54:03.260027    3501 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:54:03.260163    3501 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/flannel-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/flannel-967000/disk.qcow2
	I0906 16:54:03.268797    3501 main.go:141] libmachine: STDOUT: 
	I0906 16:54:03.268811    3501 main.go:141] libmachine: STDERR: 
	I0906 16:54:03.268863    3501 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/flannel-967000/disk.qcow2 +20000M
	I0906 16:54:03.275989    3501 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:54:03.276018    3501 main.go:141] libmachine: STDERR: 
	I0906 16:54:03.276037    3501 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/flannel-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/flannel-967000/disk.qcow2
	I0906 16:54:03.276047    3501 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:54:03.276089    3501 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/flannel-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/flannel-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/flannel-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:41:d0:06:a1:ba -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/flannel-967000/disk.qcow2
	I0906 16:54:03.277644    3501 main.go:141] libmachine: STDOUT: 
	I0906 16:54:03.277657    3501 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:54:03.277676    3501 client.go:171] LocalClient.Create took 269.510375ms
	I0906 16:54:05.279767    3501 start.go:128] duration metric: createHost completed in 2.294189666s
	I0906 16:54:05.279829    3501 start.go:83] releasing machines lock for "flannel-967000", held for 2.294297958s
	W0906 16:54:05.279924    3501 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:54:05.288207    3501 out.go:177] * Deleting "flannel-967000" in qemu2 ...
	W0906 16:54:05.311581    3501 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:54:05.311607    3501 start.go:687] Will try again in 5 seconds ...
	I0906 16:54:10.313632    3501 start.go:365] acquiring machines lock for flannel-967000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:54:10.314141    3501 start.go:369] acquired machines lock for "flannel-967000" in 405.334µs
	I0906 16:54:10.314265    3501 start.go:93] Provisioning new machine with config: &{Name:flannel-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:flannel-967000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:54:10.314526    3501 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:54:10.324254    3501 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 16:54:10.371290    3501 start.go:159] libmachine.API.Create for "flannel-967000" (driver="qemu2")
	I0906 16:54:10.371331    3501 client.go:168] LocalClient.Create starting
	I0906 16:54:10.371474    3501 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:54:10.371537    3501 main.go:141] libmachine: Decoding PEM data...
	I0906 16:54:10.371558    3501 main.go:141] libmachine: Parsing certificate...
	I0906 16:54:10.371670    3501 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:54:10.371731    3501 main.go:141] libmachine: Decoding PEM data...
	I0906 16:54:10.371748    3501 main.go:141] libmachine: Parsing certificate...
	I0906 16:54:10.372384    3501 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:54:10.500184    3501 main.go:141] libmachine: Creating SSH key...
	I0906 16:54:10.571545    3501 main.go:141] libmachine: Creating Disk image...
	I0906 16:54:10.571550    3501 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:54:10.571694    3501 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/flannel-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/flannel-967000/disk.qcow2
	I0906 16:54:10.580310    3501 main.go:141] libmachine: STDOUT: 
	I0906 16:54:10.580324    3501 main.go:141] libmachine: STDERR: 
	I0906 16:54:10.580380    3501 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/flannel-967000/disk.qcow2 +20000M
	I0906 16:54:10.587814    3501 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:54:10.587826    3501 main.go:141] libmachine: STDERR: 
	I0906 16:54:10.587839    3501 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/flannel-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/flannel-967000/disk.qcow2
	I0906 16:54:10.587847    3501 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:54:10.587895    3501 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/flannel-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/flannel-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/flannel-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:e4:7d:bc:a3:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/flannel-967000/disk.qcow2
	I0906 16:54:10.589434    3501 main.go:141] libmachine: STDOUT: 
	I0906 16:54:10.589450    3501 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:54:10.589461    3501 client.go:171] LocalClient.Create took 218.134291ms
	I0906 16:54:12.591550    3501 start.go:128] duration metric: createHost completed in 2.277089333s
	I0906 16:54:12.591652    3501 start.go:83] releasing machines lock for "flannel-967000", held for 2.277547833s
	W0906 16:54:12.592086    3501 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:54:12.601754    3501 out.go:177] 
	W0906 16:54:12.605911    3501 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 16:54:12.605949    3501 out.go:239] * 
	* 
	W0906 16:54:12.608429    3501 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 16:54:12.616791    3501 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.76s)
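Every qemu2 start in this report fails the same way: the `socket_vmnet_client` wrapper cannot reach the daemon's unix socket at `/var/run/socket_vmnet` ("Connection refused"), so host creation fails and minikube exits with status 80 (GUEST_PROVISION). A minimal pre-flight sketch for reproducing locally, assuming the Homebrew socket_vmnet layout shown in the logs (the `check_vmnet_socket` helper and `SOCKET_PATH` override are illustrative, not part of the test suite):

```shell
# check_vmnet_socket: succeed only if the given path exists and is a
# unix socket, i.e. the socket_vmnet daemon is actually listening there.
check_vmnet_socket() {
  if [ -S "$1" ]; then
    echo "ok: $1 is a unix socket"
    return 0
  fi
  # Daemon not running (or socket elsewhere); suggest restarting it.
  echo "missing: $1 (e.g. 'sudo brew services start socket_vmnet')"
  return 1
}

# Default to the path minikube's qemu2 driver uses in the logs above.
check_vmnet_socket "${SOCKET_PATH:-/var/run/socket_vmnet}" || true
```

Running this before the `TestNetworkPlugins` group would distinguish a dead socket_vmnet daemon (the failure seen here, which affects every qemu2 start on the agent) from a genuine per-plugin regression.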

TestNetworkPlugins/group/enable-default-cni/Start (9.73s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-967000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-967000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.731254083s)

-- stdout --
	* [enable-default-cni-967000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node enable-default-cni-967000 in cluster enable-default-cni-967000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-967000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 16:54:14.975573    3622 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:54:14.975691    3622 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:54:14.975695    3622 out.go:309] Setting ErrFile to fd 2...
	I0906 16:54:14.975698    3622 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:54:14.975821    3622 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:54:14.976794    3622 out.go:303] Setting JSON to false
	I0906 16:54:14.991931    3622 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1428,"bootTime":1694043026,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0906 16:54:14.992010    3622 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 16:54:14.997516    3622 out.go:177] * [enable-default-cni-967000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 16:54:15.005481    3622 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 16:54:15.009459    3622 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:54:15.005547    3622 notify.go:220] Checking for updates...
	I0906 16:54:15.015521    3622 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 16:54:15.018478    3622 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 16:54:15.021471    3622 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	I0906 16:54:15.024525    3622 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 16:54:15.027805    3622 config.go:182] Loaded profile config "multinode-994000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:54:15.027851    3622 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 16:54:15.032439    3622 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 16:54:15.039458    3622 start.go:298] selected driver: qemu2
	I0906 16:54:15.039463    3622 start.go:902] validating driver "qemu2" against <nil>
	I0906 16:54:15.039468    3622 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 16:54:15.041444    3622 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 16:54:15.045485    3622 out.go:177] * Automatically selected the socket_vmnet network
	E0906 16:54:15.048597    3622 start_flags.go:455] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0906 16:54:15.048611    3622 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 16:54:15.048647    3622 cni.go:84] Creating CNI manager for "bridge"
	I0906 16:54:15.048652    3622 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 16:54:15.048658    3622 start_flags.go:321] config:
	{Name:enable-default-cni-967000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:enable-default-cni-967000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Sta
ticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:54:15.053123    3622 iso.go:125] acquiring lock: {Name:mk208580a41bd83b6238142fbb7ad0497ef8967b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:54:15.061466    3622 out.go:177] * Starting control plane node enable-default-cni-967000 in cluster enable-default-cni-967000
	I0906 16:54:15.065432    3622 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 16:54:15.065452    3622 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 16:54:15.065472    3622 cache.go:57] Caching tarball of preloaded images
	I0906 16:54:15.065532    3622 preload.go:174] Found /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 16:54:15.065539    3622 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 16:54:15.065620    3622 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/enable-default-cni-967000/config.json ...
	I0906 16:54:15.065639    3622 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/enable-default-cni-967000/config.json: {Name:mk1f9610a6bd2e4ec4d74a62c18b11b81a3de907 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:54:15.065848    3622 start.go:365] acquiring machines lock for enable-default-cni-967000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:54:15.065881    3622 start.go:369] acquired machines lock for "enable-default-cni-967000" in 25.583µs
	I0906 16:54:15.065896    3622 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:enable-default-cni-967000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:54:15.065924    3622 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:54:15.074442    3622 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 16:54:15.090815    3622 start.go:159] libmachine.API.Create for "enable-default-cni-967000" (driver="qemu2")
	I0906 16:54:15.090855    3622 client.go:168] LocalClient.Create starting
	I0906 16:54:15.090916    3622 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:54:15.090945    3622 main.go:141] libmachine: Decoding PEM data...
	I0906 16:54:15.090956    3622 main.go:141] libmachine: Parsing certificate...
	I0906 16:54:15.090999    3622 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:54:15.091020    3622 main.go:141] libmachine: Decoding PEM data...
	I0906 16:54:15.091029    3622 main.go:141] libmachine: Parsing certificate...
	I0906 16:54:15.091374    3622 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:54:15.210160    3622 main.go:141] libmachine: Creating SSH key...
	I0906 16:54:15.299461    3622 main.go:141] libmachine: Creating Disk image...
	I0906 16:54:15.299468    3622 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:54:15.299612    3622 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/enable-default-cni-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/enable-default-cni-967000/disk.qcow2
	I0906 16:54:15.308229    3622 main.go:141] libmachine: STDOUT: 
	I0906 16:54:15.308242    3622 main.go:141] libmachine: STDERR: 
	I0906 16:54:15.308293    3622 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/enable-default-cni-967000/disk.qcow2 +20000M
	I0906 16:54:15.315434    3622 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:54:15.315446    3622 main.go:141] libmachine: STDERR: 
	I0906 16:54:15.315461    3622 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/enable-default-cni-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/enable-default-cni-967000/disk.qcow2
	I0906 16:54:15.315467    3622 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:54:15.315501    3622 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/enable-default-cni-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/enable-default-cni-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/enable-default-cni-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:9a:5b:77:00:37 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/enable-default-cni-967000/disk.qcow2
	I0906 16:54:15.316999    3622 main.go:141] libmachine: STDOUT: 
	I0906 16:54:15.317014    3622 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:54:15.317034    3622 client.go:171] LocalClient.Create took 226.18025ms
	I0906 16:54:17.319108    3622 start.go:128] duration metric: createHost completed in 2.253257042s
	I0906 16:54:17.319176    3622 start.go:83] releasing machines lock for "enable-default-cni-967000", held for 2.253377916s
	W0906 16:54:17.319261    3622 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:54:17.326566    3622 out.go:177] * Deleting "enable-default-cni-967000" in qemu2 ...
	W0906 16:54:17.346830    3622 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:54:17.346857    3622 start.go:687] Will try again in 5 seconds ...
	I0906 16:54:22.348960    3622 start.go:365] acquiring machines lock for enable-default-cni-967000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:54:22.349561    3622 start.go:369] acquired machines lock for "enable-default-cni-967000" in 475.333µs
	I0906 16:54:22.349690    3622 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:enable-default-cni-967000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:54:22.350005    3622 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:54:22.354698    3622 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 16:54:22.401299    3622 start.go:159] libmachine.API.Create for "enable-default-cni-967000" (driver="qemu2")
	I0906 16:54:22.401339    3622 client.go:168] LocalClient.Create starting
	I0906 16:54:22.401503    3622 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:54:22.401580    3622 main.go:141] libmachine: Decoding PEM data...
	I0906 16:54:22.401595    3622 main.go:141] libmachine: Parsing certificate...
	I0906 16:54:22.401666    3622 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:54:22.401703    3622 main.go:141] libmachine: Decoding PEM data...
	I0906 16:54:22.401738    3622 main.go:141] libmachine: Parsing certificate...
	I0906 16:54:22.402265    3622 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:54:22.527418    3622 main.go:141] libmachine: Creating SSH key...
	I0906 16:54:22.620535    3622 main.go:141] libmachine: Creating Disk image...
	I0906 16:54:22.620541    3622 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:54:22.620675    3622 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/enable-default-cni-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/enable-default-cni-967000/disk.qcow2
	I0906 16:54:22.629147    3622 main.go:141] libmachine: STDOUT: 
	I0906 16:54:22.629164    3622 main.go:141] libmachine: STDERR: 
	I0906 16:54:22.629227    3622 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/enable-default-cni-967000/disk.qcow2 +20000M
	I0906 16:54:22.636392    3622 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:54:22.636407    3622 main.go:141] libmachine: STDERR: 
	I0906 16:54:22.636427    3622 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/enable-default-cni-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/enable-default-cni-967000/disk.qcow2
	I0906 16:54:22.636434    3622 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:54:22.636483    3622 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/enable-default-cni-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/enable-default-cni-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/enable-default-cni-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:58:bc:51:5c:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/enable-default-cni-967000/disk.qcow2
	I0906 16:54:22.637977    3622 main.go:141] libmachine: STDOUT: 
	I0906 16:54:22.637993    3622 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:54:22.638005    3622 client.go:171] LocalClient.Create took 236.670625ms
	I0906 16:54:24.640092    3622 start.go:128] duration metric: createHost completed in 2.290140708s
	I0906 16:54:24.640158    3622 start.go:83] releasing machines lock for "enable-default-cni-967000", held for 2.290659959s
	W0906 16:54:24.640630    3622 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:54:24.649285    3622 out.go:177] 
	W0906 16:54:24.653365    3622 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 16:54:24.653417    3622 out.go:239] * 
	* 
	W0906 16:54:24.656062    3622 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 16:54:24.665314    3622 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.73s)

TestNetworkPlugins/group/bridge/Start (9.7s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-967000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-967000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.7007785s)

-- stdout --
	* [bridge-967000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node bridge-967000 in cluster bridge-967000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-967000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 16:54:26.839476    3732 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:54:26.839605    3732 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:54:26.839610    3732 out.go:309] Setting ErrFile to fd 2...
	I0906 16:54:26.839612    3732 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:54:26.839727    3732 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:54:26.840751    3732 out.go:303] Setting JSON to false
	I0906 16:54:26.855996    3732 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1440,"bootTime":1694043026,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0906 16:54:26.856051    3732 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 16:54:26.860440    3732 out.go:177] * [bridge-967000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 16:54:26.868312    3732 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 16:54:26.872300    3732 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:54:26.868331    3732 notify.go:220] Checking for updates...
	I0906 16:54:26.878296    3732 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 16:54:26.881358    3732 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 16:54:26.882700    3732 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	I0906 16:54:26.885263    3732 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 16:54:26.888684    3732 config.go:182] Loaded profile config "multinode-994000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:54:26.888725    3732 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 16:54:26.893158    3732 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 16:54:26.900268    3732 start.go:298] selected driver: qemu2
	I0906 16:54:26.900281    3732 start.go:902] validating driver "qemu2" against <nil>
	I0906 16:54:26.900288    3732 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 16:54:26.902189    3732 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 16:54:26.905308    3732 out.go:177] * Automatically selected the socket_vmnet network
	I0906 16:54:26.908448    3732 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 16:54:26.908474    3732 cni.go:84] Creating CNI manager for "bridge"
	I0906 16:54:26.908479    3732 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 16:54:26.908484    3732 start_flags.go:321] config:
	{Name:bridge-967000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:bridge-967000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHA
gentPID:0 AutoPauseInterval:1m0s}
	I0906 16:54:26.912900    3732 iso.go:125] acquiring lock: {Name:mk208580a41bd83b6238142fbb7ad0497ef8967b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:54:26.919262    3732 out.go:177] * Starting control plane node bridge-967000 in cluster bridge-967000
	I0906 16:54:26.923286    3732 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 16:54:26.923311    3732 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 16:54:26.923325    3732 cache.go:57] Caching tarball of preloaded images
	I0906 16:54:26.923424    3732 preload.go:174] Found /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 16:54:26.923435    3732 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 16:54:26.923517    3732 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/bridge-967000/config.json ...
	I0906 16:54:26.923530    3732 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/bridge-967000/config.json: {Name:mkcdaa4d4a24db19107ac22543a5acb395d2fd2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:54:26.923742    3732 start.go:365] acquiring machines lock for bridge-967000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:54:26.923772    3732 start.go:369] acquired machines lock for "bridge-967000" in 24.5µs
	I0906 16:54:26.923783    3732 start.go:93] Provisioning new machine with config: &{Name:bridge-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.1 ClusterName:bridge-967000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:54:26.923813    3732 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:54:26.931260    3732 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 16:54:26.947218    3732 start.go:159] libmachine.API.Create for "bridge-967000" (driver="qemu2")
	I0906 16:54:26.947240    3732 client.go:168] LocalClient.Create starting
	I0906 16:54:26.947302    3732 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:54:26.947328    3732 main.go:141] libmachine: Decoding PEM data...
	I0906 16:54:26.947340    3732 main.go:141] libmachine: Parsing certificate...
	I0906 16:54:26.947381    3732 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:54:26.947398    3732 main.go:141] libmachine: Decoding PEM data...
	I0906 16:54:26.947406    3732 main.go:141] libmachine: Parsing certificate...
	I0906 16:54:26.947721    3732 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:54:27.062840    3732 main.go:141] libmachine: Creating SSH key...
	I0906 16:54:27.143750    3732 main.go:141] libmachine: Creating Disk image...
	I0906 16:54:27.143756    3732 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:54:27.143892    3732 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/bridge-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/bridge-967000/disk.qcow2
	I0906 16:54:27.152479    3732 main.go:141] libmachine: STDOUT: 
	I0906 16:54:27.152491    3732 main.go:141] libmachine: STDERR: 
	I0906 16:54:27.152542    3732 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/bridge-967000/disk.qcow2 +20000M
	I0906 16:54:27.159677    3732 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:54:27.159688    3732 main.go:141] libmachine: STDERR: 
	I0906 16:54:27.159706    3732 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/bridge-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/bridge-967000/disk.qcow2
	I0906 16:54:27.159713    3732 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:54:27.159753    3732 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/bridge-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/bridge-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/bridge-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:21:cf:79:35:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/bridge-967000/disk.qcow2
	I0906 16:54:27.161322    3732 main.go:141] libmachine: STDOUT: 
	I0906 16:54:27.161333    3732 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:54:27.161351    3732 client.go:171] LocalClient.Create took 214.113833ms
	I0906 16:54:29.163414    3732 start.go:128] duration metric: createHost completed in 2.239679375s
	I0906 16:54:29.163747    3732 start.go:83] releasing machines lock for "bridge-967000", held for 2.239798125s
	W0906 16:54:29.163817    3732 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:54:29.174987    3732 out.go:177] * Deleting "bridge-967000" in qemu2 ...
	W0906 16:54:29.194141    3732 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:54:29.194174    3732 start.go:687] Will try again in 5 seconds ...
	I0906 16:54:34.196234    3732 start.go:365] acquiring machines lock for bridge-967000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:54:34.196771    3732 start.go:369] acquired machines lock for "bridge-967000" in 412.458µs
	I0906 16:54:34.196908    3732 start.go:93] Provisioning new machine with config: &{Name:bridge-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.1 ClusterName:bridge-967000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:54:34.197266    3732 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:54:34.208779    3732 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 16:54:34.256212    3732 start.go:159] libmachine.API.Create for "bridge-967000" (driver="qemu2")
	I0906 16:54:34.256268    3732 client.go:168] LocalClient.Create starting
	I0906 16:54:34.256397    3732 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:54:34.256454    3732 main.go:141] libmachine: Decoding PEM data...
	I0906 16:54:34.256474    3732 main.go:141] libmachine: Parsing certificate...
	I0906 16:54:34.256556    3732 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:54:34.256595    3732 main.go:141] libmachine: Decoding PEM data...
	I0906 16:54:34.256611    3732 main.go:141] libmachine: Parsing certificate...
	I0906 16:54:34.257114    3732 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:54:34.383022    3732 main.go:141] libmachine: Creating SSH key...
	I0906 16:54:34.448684    3732 main.go:141] libmachine: Creating Disk image...
	I0906 16:54:34.448689    3732 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:54:34.448836    3732 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/bridge-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/bridge-967000/disk.qcow2
	I0906 16:54:34.457320    3732 main.go:141] libmachine: STDOUT: 
	I0906 16:54:34.457333    3732 main.go:141] libmachine: STDERR: 
	I0906 16:54:34.457399    3732 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/bridge-967000/disk.qcow2 +20000M
	I0906 16:54:34.464546    3732 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:54:34.464561    3732 main.go:141] libmachine: STDERR: 
	I0906 16:54:34.464574    3732 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/bridge-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/bridge-967000/disk.qcow2
	I0906 16:54:34.464579    3732 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:54:34.464629    3732 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/bridge-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/bridge-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/bridge-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:8f:d7:f2:94:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/bridge-967000/disk.qcow2
	I0906 16:54:34.466125    3732 main.go:141] libmachine: STDOUT: 
	I0906 16:54:34.466138    3732 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:54:34.466149    3732 client.go:171] LocalClient.Create took 209.882625ms
	I0906 16:54:36.468233    3732 start.go:128] duration metric: createHost completed in 2.271038625s
	I0906 16:54:36.468297    3732 start.go:83] releasing machines lock for "bridge-967000", held for 2.271590917s
	W0906 16:54:36.468688    3732 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:54:36.479228    3732 out.go:177] 
	W0906 16:54:36.483348    3732 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 16:54:36.483370    3732 out.go:239] * 
	* 
	W0906 16:54:36.486187    3732 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 16:54:36.498293    3732 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.70s)

TestNetworkPlugins/group/kubenet/Start (9.73s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-967000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-967000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.724230208s)

-- stdout --
	* [kubenet-967000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubenet-967000 in cluster kubenet-967000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-967000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 16:54:38.678424    3842 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:54:38.678526    3842 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:54:38.678529    3842 out.go:309] Setting ErrFile to fd 2...
	I0906 16:54:38.678531    3842 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:54:38.678630    3842 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:54:38.679601    3842 out.go:303] Setting JSON to false
	I0906 16:54:38.694781    3842 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1452,"bootTime":1694043026,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0906 16:54:38.694837    3842 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 16:54:38.700519    3842 out.go:177] * [kubenet-967000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 16:54:38.708376    3842 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 16:54:38.708430    3842 notify.go:220] Checking for updates...
	I0906 16:54:38.711461    3842 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:54:38.714394    3842 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 16:54:38.717337    3842 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 16:54:38.720339    3842 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	I0906 16:54:38.723422    3842 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 16:54:38.726641    3842 config.go:182] Loaded profile config "multinode-994000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:54:38.726685    3842 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 16:54:38.729390    3842 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 16:54:38.736263    3842 start.go:298] selected driver: qemu2
	I0906 16:54:38.736268    3842 start.go:902] validating driver "qemu2" against <nil>
	I0906 16:54:38.736273    3842 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 16:54:38.738187    3842 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 16:54:38.742395    3842 out.go:177] * Automatically selected the socket_vmnet network
	I0906 16:54:38.745499    3842 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 16:54:38.745537    3842 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0906 16:54:38.745545    3842 start_flags.go:321] config:
	{Name:kubenet-967000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kubenet-967000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHA
gentPID:0 AutoPauseInterval:1m0s}
	I0906 16:54:38.750033    3842 iso.go:125] acquiring lock: {Name:mk208580a41bd83b6238142fbb7ad0497ef8967b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:54:38.758378    3842 out.go:177] * Starting control plane node kubenet-967000 in cluster kubenet-967000
	I0906 16:54:38.761308    3842 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 16:54:38.761329    3842 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 16:54:38.761350    3842 cache.go:57] Caching tarball of preloaded images
	I0906 16:54:38.761415    3842 preload.go:174] Found /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 16:54:38.761420    3842 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 16:54:38.761489    3842 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/kubenet-967000/config.json ...
	I0906 16:54:38.761502    3842 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/kubenet-967000/config.json: {Name:mk21f7293c105db030239e4558706ad1ba91f8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:54:38.761711    3842 start.go:365] acquiring machines lock for kubenet-967000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:54:38.761741    3842 start.go:369] acquired machines lock for "kubenet-967000" in 24.334µs
	I0906 16:54:38.761752    3842 start.go:93] Provisioning new machine with config: &{Name:kubenet-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:kubenet-967000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:54:38.761788    3842 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:54:38.765453    3842 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 16:54:38.781409    3842 start.go:159] libmachine.API.Create for "kubenet-967000" (driver="qemu2")
	I0906 16:54:38.781432    3842 client.go:168] LocalClient.Create starting
	I0906 16:54:38.781497    3842 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:54:38.781528    3842 main.go:141] libmachine: Decoding PEM data...
	I0906 16:54:38.781540    3842 main.go:141] libmachine: Parsing certificate...
	I0906 16:54:38.781577    3842 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:54:38.781596    3842 main.go:141] libmachine: Decoding PEM data...
	I0906 16:54:38.781604    3842 main.go:141] libmachine: Parsing certificate...
	I0906 16:54:38.781899    3842 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:54:38.894257    3842 main.go:141] libmachine: Creating SSH key...
	I0906 16:54:38.965335    3842 main.go:141] libmachine: Creating Disk image...
	I0906 16:54:38.965340    3842 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:54:38.965464    3842 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubenet-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubenet-967000/disk.qcow2
	I0906 16:54:38.973893    3842 main.go:141] libmachine: STDOUT: 
	I0906 16:54:38.973906    3842 main.go:141] libmachine: STDERR: 
	I0906 16:54:38.973963    3842 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubenet-967000/disk.qcow2 +20000M
	I0906 16:54:38.981105    3842 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:54:38.981117    3842 main.go:141] libmachine: STDERR: 
	I0906 16:54:38.981130    3842 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubenet-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubenet-967000/disk.qcow2
	I0906 16:54:38.981136    3842 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:54:38.981169    3842 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubenet-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubenet-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubenet-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:51:26:8c:b9:fc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubenet-967000/disk.qcow2
	I0906 16:54:38.982641    3842 main.go:141] libmachine: STDOUT: 
	I0906 16:54:38.982651    3842 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:54:38.982670    3842 client.go:171] LocalClient.Create took 201.2395ms
	I0906 16:54:40.984802    3842 start.go:128] duration metric: createHost completed in 2.223079584s
	I0906 16:54:40.984886    3842 start.go:83] releasing machines lock for "kubenet-967000", held for 2.223226917s
	W0906 16:54:40.984990    3842 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:54:40.992600    3842 out.go:177] * Deleting "kubenet-967000" in qemu2 ...
	W0906 16:54:41.011911    3842 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:54:41.011940    3842 start.go:687] Will try again in 5 seconds ...
	I0906 16:54:46.014054    3842 start.go:365] acquiring machines lock for kubenet-967000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:54:46.014625    3842 start.go:369] acquired machines lock for "kubenet-967000" in 450.208µs
	I0906 16:54:46.014788    3842 start.go:93] Provisioning new machine with config: &{Name:kubenet-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:kubenet-967000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:54:46.015078    3842 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:54:46.022706    3842 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 16:54:46.070670    3842 start.go:159] libmachine.API.Create for "kubenet-967000" (driver="qemu2")
	I0906 16:54:46.070729    3842 client.go:168] LocalClient.Create starting
	I0906 16:54:46.070899    3842 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:54:46.070969    3842 main.go:141] libmachine: Decoding PEM data...
	I0906 16:54:46.070994    3842 main.go:141] libmachine: Parsing certificate...
	I0906 16:54:46.071075    3842 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:54:46.071112    3842 main.go:141] libmachine: Decoding PEM data...
	I0906 16:54:46.071123    3842 main.go:141] libmachine: Parsing certificate...
	I0906 16:54:46.071653    3842 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:54:46.198473    3842 main.go:141] libmachine: Creating SSH key...
	I0906 16:54:46.317955    3842 main.go:141] libmachine: Creating Disk image...
	I0906 16:54:46.317962    3842 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:54:46.318093    3842 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubenet-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubenet-967000/disk.qcow2
	I0906 16:54:46.326636    3842 main.go:141] libmachine: STDOUT: 
	I0906 16:54:46.326655    3842 main.go:141] libmachine: STDERR: 
	I0906 16:54:46.326709    3842 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubenet-967000/disk.qcow2 +20000M
	I0906 16:54:46.333943    3842 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:54:46.333956    3842 main.go:141] libmachine: STDERR: 
	I0906 16:54:46.333975    3842 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubenet-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubenet-967000/disk.qcow2
	I0906 16:54:46.333984    3842 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:54:46.334024    3842 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubenet-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubenet-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubenet-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:56:d7:9a:4f:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/kubenet-967000/disk.qcow2
	I0906 16:54:46.335559    3842 main.go:141] libmachine: STDOUT: 
	I0906 16:54:46.335574    3842 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:54:46.335592    3842 client.go:171] LocalClient.Create took 264.868625ms
	I0906 16:54:48.337665    3842 start.go:128] duration metric: createHost completed in 2.322662s
	I0906 16:54:48.337731    3842 start.go:83] releasing machines lock for "kubenet-967000", held for 2.323175958s
	W0906 16:54:48.338159    3842 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:54:48.345678    3842 out.go:177] 
	W0906 16:54:48.349763    3842 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 16:54:48.349795    3842 out.go:239] * 
	* 
	W0906 16:54:48.352473    3842 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 16:54:48.360691    3842 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.73s)
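Every failure in this group reduces to the same root cause: `socket_vmnet_client` cannot reach the UNIX socket at `/var/run/socket_vmnet`, so the `socket_vmnet` daemon is evidently not listening on the build agent. A minimal pre-flight check is sketched below; the `diagnose_vmnet_socket` helper name and the launchd hint are illustrative assumptions, not part of minikube or socket_vmnet.

```shell
# Hypothetical pre-flight helper: verify the socket_vmnet daemon is serving
# its UNIX socket before launching minikube with --driver=qemu2.
diagnose_vmnet_socket() {
  # Default to the path minikube passes as SocketVMnetPath.
  sock="${1:-/var/run/socket_vmnet}"
  if [ -S "$sock" ]; then
    # The socket file exists and is a socket; the daemon is likely up.
    echo "socket present: $sock"
  else
    # Matches the "Connection refused" seen above: nothing is listening.
    echo "socket missing: $sock (start the socket_vmnet daemon, e.g. via launchd)"
  fi
}

diagnose_vmnet_socket
```

If the socket is missing, restarting the daemon on the agent (however it is managed there, e.g. as a launchd service) before re-running the suite should clear these `GUEST_PROVISION` exits.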
TestNetworkPlugins/group/custom-flannel/Start (9.69s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-967000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-967000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.688262291s)

-- stdout --
	* [custom-flannel-967000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node custom-flannel-967000 in cluster custom-flannel-967000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-967000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 16:54:50.529135    3952 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:54:50.529248    3952 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:54:50.529251    3952 out.go:309] Setting ErrFile to fd 2...
	I0906 16:54:50.529253    3952 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:54:50.529369    3952 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:54:50.530398    3952 out.go:303] Setting JSON to false
	I0906 16:54:50.545391    3952 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1464,"bootTime":1694043026,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0906 16:54:50.545461    3952 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 16:54:50.551062    3952 out.go:177] * [custom-flannel-967000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 16:54:50.558095    3952 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 16:54:50.558141    3952 notify.go:220] Checking for updates...
	I0906 16:54:50.564040    3952 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:54:50.567073    3952 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 16:54:50.569995    3952 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 16:54:50.573076    3952 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	I0906 16:54:50.576063    3952 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 16:54:50.579328    3952 config.go:182] Loaded profile config "multinode-994000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:54:50.579372    3952 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 16:54:50.583018    3952 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 16:54:50.590013    3952 start.go:298] selected driver: qemu2
	I0906 16:54:50.590019    3952 start.go:902] validating driver "qemu2" against <nil>
	I0906 16:54:50.590027    3952 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 16:54:50.591886    3952 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 16:54:50.595092    3952 out.go:177] * Automatically selected the socket_vmnet network
	I0906 16:54:50.598106    3952 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 16:54:50.598130    3952 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0906 16:54:50.598154    3952 start_flags.go:316] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0906 16:54:50.598163    3952 start_flags.go:321] config:
	{Name:custom-flannel-967000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:custom-flannel-967000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/sock
et_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:54:50.602297    3952 iso.go:125] acquiring lock: {Name:mk208580a41bd83b6238142fbb7ad0497ef8967b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:54:50.611028    3952 out.go:177] * Starting control plane node custom-flannel-967000 in cluster custom-flannel-967000
	I0906 16:54:50.614829    3952 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 16:54:50.614851    3952 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 16:54:50.614865    3952 cache.go:57] Caching tarball of preloaded images
	I0906 16:54:50.614931    3952 preload.go:174] Found /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 16:54:50.614937    3952 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 16:54:50.614994    3952 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/custom-flannel-967000/config.json ...
	I0906 16:54:50.615006    3952 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/custom-flannel-967000/config.json: {Name:mk4eac38f615b702385edfbde063ebcb684d99e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:54:50.615200    3952 start.go:365] acquiring machines lock for custom-flannel-967000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:54:50.615231    3952 start.go:369] acquired machines lock for "custom-flannel-967000" in 24.209µs
	I0906 16:54:50.615242    3952 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.28.1 ClusterName:custom-flannel-967000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:54:50.615269    3952 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:54:50.622924    3952 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 16:54:50.638942    3952 start.go:159] libmachine.API.Create for "custom-flannel-967000" (driver="qemu2")
	I0906 16:54:50.638962    3952 client.go:168] LocalClient.Create starting
	I0906 16:54:50.639025    3952 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:54:50.639051    3952 main.go:141] libmachine: Decoding PEM data...
	I0906 16:54:50.639061    3952 main.go:141] libmachine: Parsing certificate...
	I0906 16:54:50.639104    3952 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:54:50.639123    3952 main.go:141] libmachine: Decoding PEM data...
	I0906 16:54:50.639132    3952 main.go:141] libmachine: Parsing certificate...
	I0906 16:54:50.639479    3952 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:54:50.759146    3952 main.go:141] libmachine: Creating SSH key...
	I0906 16:54:50.862092    3952 main.go:141] libmachine: Creating Disk image...
	I0906 16:54:50.862098    3952 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:54:50.862227    3952 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/custom-flannel-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/custom-flannel-967000/disk.qcow2
	I0906 16:54:50.870630    3952 main.go:141] libmachine: STDOUT: 
	I0906 16:54:50.870644    3952 main.go:141] libmachine: STDERR: 
	I0906 16:54:50.870693    3952 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/custom-flannel-967000/disk.qcow2 +20000M
	I0906 16:54:50.877799    3952 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:54:50.877821    3952 main.go:141] libmachine: STDERR: 
	I0906 16:54:50.877833    3952 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/custom-flannel-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/custom-flannel-967000/disk.qcow2
	I0906 16:54:50.877839    3952 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:54:50.877879    3952 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/custom-flannel-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/custom-flannel-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/custom-flannel-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:56:f4:65:4a:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/custom-flannel-967000/disk.qcow2
	I0906 16:54:50.879373    3952 main.go:141] libmachine: STDOUT: 
	I0906 16:54:50.879387    3952 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:54:50.879406    3952 client.go:171] LocalClient.Create took 240.441375ms
	I0906 16:54:52.881478    3952 start.go:128] duration metric: createHost completed in 2.26628575s
	I0906 16:54:52.881777    3952 start.go:83] releasing machines lock for "custom-flannel-967000", held for 2.266628709s
	W0906 16:54:52.881834    3952 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:54:52.888988    3952 out.go:177] * Deleting "custom-flannel-967000" in qemu2 ...
	W0906 16:54:52.910749    3952 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:54:52.910773    3952 start.go:687] Will try again in 5 seconds ...
	I0906 16:54:57.912769    3952 start.go:365] acquiring machines lock for custom-flannel-967000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:54:57.913248    3952 start.go:369] acquired machines lock for "custom-flannel-967000" in 379.208µs
	I0906 16:54:57.913895    3952 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.28.1 ClusterName:custom-flannel-967000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:54:57.914196    3952 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:54:57.924636    3952 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 16:54:57.970576    3952 start.go:159] libmachine.API.Create for "custom-flannel-967000" (driver="qemu2")
	I0906 16:54:57.970617    3952 client.go:168] LocalClient.Create starting
	I0906 16:54:57.970736    3952 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:54:57.970805    3952 main.go:141] libmachine: Decoding PEM data...
	I0906 16:54:57.970838    3952 main.go:141] libmachine: Parsing certificate...
	I0906 16:54:57.970914    3952 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:54:57.970959    3952 main.go:141] libmachine: Decoding PEM data...
	I0906 16:54:57.970976    3952 main.go:141] libmachine: Parsing certificate...
	I0906 16:54:57.971480    3952 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:54:58.097843    3952 main.go:141] libmachine: Creating SSH key...
	I0906 16:54:58.130358    3952 main.go:141] libmachine: Creating Disk image...
	I0906 16:54:58.130363    3952 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:54:58.130504    3952 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/custom-flannel-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/custom-flannel-967000/disk.qcow2
	I0906 16:54:58.138863    3952 main.go:141] libmachine: STDOUT: 
	I0906 16:54:58.138878    3952 main.go:141] libmachine: STDERR: 
	I0906 16:54:58.138924    3952 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/custom-flannel-967000/disk.qcow2 +20000M
	I0906 16:54:58.146046    3952 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:54:58.146059    3952 main.go:141] libmachine: STDERR: 
	I0906 16:54:58.146073    3952 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/custom-flannel-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/custom-flannel-967000/disk.qcow2
	I0906 16:54:58.146079    3952 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:54:58.146123    3952 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/custom-flannel-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/custom-flannel-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/custom-flannel-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:2f:4f:47:d0:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/custom-flannel-967000/disk.qcow2
	I0906 16:54:58.147637    3952 main.go:141] libmachine: STDOUT: 
	I0906 16:54:58.147649    3952 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:54:58.147661    3952 client.go:171] LocalClient.Create took 177.045583ms
	I0906 16:55:00.149793    3952 start.go:128] duration metric: createHost completed in 2.235647458s
	I0906 16:55:00.149878    3952 start.go:83] releasing machines lock for "custom-flannel-967000", held for 2.236698s
	W0906 16:55:00.150308    3952 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:55:00.158973    3952 out.go:177] 
	W0906 16:55:00.164006    3952 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 16:55:00.164043    3952 out.go:239] * 
	W0906 16:55:00.166814    3952 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 16:55:00.175808    3952 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.69s)
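Every qemu2 start in this run dies the same way: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet daemon is not listening on the build agent, rather than anything specific to the network-plugin tests. A minimal diagnostic sketch; the socket path mirrors the log above, while the daemon binary path and gateway address in the comment are assumptions about a typical install, not taken from this report:

```shell
# Check whether the socket_vmnet daemon's unix socket exists on this host.
SOCKET=/var/run/socket_vmnet

if [ -S "$SOCKET" ]; then
  STATUS=present
else
  STATUS=missing
fi
echo "socket_vmnet socket is $STATUS at $SOCKET"

# If missing, starting the daemon (Homebrew-style layout assumed) usually
# clears this class of failure:
#   sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 "$SOCKET"
```

Because minikube only execs `/opt/socket_vmnet/bin/socket_vmnet_client`, a dead daemon surfaces as "Connection refused" on the client side, which is exactly the STDERR captured for each VM create attempt above.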

TestNetworkPlugins/group/calico/Start (10.69s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-967000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-967000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (10.6880585s)

-- stdout --
	* [calico-967000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node calico-967000 in cluster calico-967000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-967000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 16:55:02.548104    4070 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:55:02.548208    4070 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:55:02.548211    4070 out.go:309] Setting ErrFile to fd 2...
	I0906 16:55:02.548213    4070 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:55:02.548322    4070 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:55:02.549348    4070 out.go:303] Setting JSON to false
	I0906 16:55:02.564343    4070 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1476,"bootTime":1694043026,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0906 16:55:02.564427    4070 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 16:55:02.572192    4070 out.go:177] * [calico-967000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 16:55:02.576209    4070 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 16:55:02.576256    4070 notify.go:220] Checking for updates...
	I0906 16:55:02.580192    4070 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:55:02.583156    4070 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 16:55:02.586233    4070 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 16:55:02.590140    4070 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	I0906 16:55:02.593224    4070 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 16:55:02.596517    4070 config.go:182] Loaded profile config "multinode-994000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:55:02.596565    4070 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 16:55:02.600111    4070 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 16:55:02.607168    4070 start.go:298] selected driver: qemu2
	I0906 16:55:02.607176    4070 start.go:902] validating driver "qemu2" against <nil>
	I0906 16:55:02.607183    4070 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 16:55:02.609157    4070 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 16:55:02.612099    4070 out.go:177] * Automatically selected the socket_vmnet network
	I0906 16:55:02.616267    4070 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 16:55:02.616296    4070 cni.go:84] Creating CNI manager for "calico"
	I0906 16:55:02.616309    4070 start_flags.go:316] Found "Calico" CNI - setting NetworkPlugin=cni
	I0906 16:55:02.616315    4070 start_flags.go:321] config:
	{Name:calico-967000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:calico-967000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:55:02.620455    4070 iso.go:125] acquiring lock: {Name:mk208580a41bd83b6238142fbb7ad0497ef8967b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:55:02.629125    4070 out.go:177] * Starting control plane node calico-967000 in cluster calico-967000
	I0906 16:55:02.633144    4070 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 16:55:02.633169    4070 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 16:55:02.633191    4070 cache.go:57] Caching tarball of preloaded images
	I0906 16:55:02.633255    4070 preload.go:174] Found /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 16:55:02.633261    4070 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 16:55:02.633324    4070 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/calico-967000/config.json ...
	I0906 16:55:02.633336    4070 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/calico-967000/config.json: {Name:mkf0600e6e048a3fa76ee871544ef7ebb7c239b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:55:02.633537    4070 start.go:365] acquiring machines lock for calico-967000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:55:02.633566    4070 start.go:369] acquired machines lock for "calico-967000" in 23.25µs
	I0906 16:55:02.633579    4070 start.go:93] Provisioning new machine with config: &{Name:calico-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:calico-967000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:55:02.633611    4070 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:55:02.642182    4070 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 16:55:02.658428    4070 start.go:159] libmachine.API.Create for "calico-967000" (driver="qemu2")
	I0906 16:55:02.658462    4070 client.go:168] LocalClient.Create starting
	I0906 16:55:02.658533    4070 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:55:02.658561    4070 main.go:141] libmachine: Decoding PEM data...
	I0906 16:55:02.658570    4070 main.go:141] libmachine: Parsing certificate...
	I0906 16:55:02.658618    4070 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:55:02.658637    4070 main.go:141] libmachine: Decoding PEM data...
	I0906 16:55:02.658648    4070 main.go:141] libmachine: Parsing certificate...
	I0906 16:55:02.659254    4070 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:55:02.773124    4070 main.go:141] libmachine: Creating SSH key...
	I0906 16:55:02.845211    4070 main.go:141] libmachine: Creating Disk image...
	I0906 16:55:02.845216    4070 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:55:02.845350    4070 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/calico-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/calico-967000/disk.qcow2
	I0906 16:55:02.853753    4070 main.go:141] libmachine: STDOUT: 
	I0906 16:55:02.853768    4070 main.go:141] libmachine: STDERR: 
	I0906 16:55:02.853816    4070 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/calico-967000/disk.qcow2 +20000M
	I0906 16:55:02.860924    4070 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:55:02.860938    4070 main.go:141] libmachine: STDERR: 
	I0906 16:55:02.860953    4070 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/calico-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/calico-967000/disk.qcow2
	I0906 16:55:02.860959    4070 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:55:02.860993    4070 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/calico-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/calico-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/calico-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:c2:6c:4a:7e:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/calico-967000/disk.qcow2
	I0906 16:55:02.862451    4070 main.go:141] libmachine: STDOUT: 
	I0906 16:55:02.862467    4070 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:55:02.862486    4070 client.go:171] LocalClient.Create took 204.026167ms
	I0906 16:55:04.864583    4070 start.go:128] duration metric: createHost completed in 2.231046917s
	I0906 16:55:04.864645    4070 start.go:83] releasing machines lock for "calico-967000", held for 2.231161416s
	W0906 16:55:04.864733    4070 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:55:04.876229    4070 out.go:177] * Deleting "calico-967000" in qemu2 ...
	W0906 16:55:04.895880    4070 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:55:04.895902    4070 start.go:687] Will try again in 5 seconds ...
	I0906 16:55:09.897989    4070 start.go:365] acquiring machines lock for calico-967000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:55:10.894071    4070 start.go:369] acquired machines lock for "calico-967000" in 995.960125ms
	I0906 16:55:10.894251    4070 start.go:93] Provisioning new machine with config: &{Name:calico-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:calico-967000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:55:10.894591    4070 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:55:10.906257    4070 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 16:55:10.953163    4070 start.go:159] libmachine.API.Create for "calico-967000" (driver="qemu2")
	I0906 16:55:10.953213    4070 client.go:168] LocalClient.Create starting
	I0906 16:55:10.953344    4070 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:55:10.953394    4070 main.go:141] libmachine: Decoding PEM data...
	I0906 16:55:10.953415    4070 main.go:141] libmachine: Parsing certificate...
	I0906 16:55:10.953476    4070 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:55:10.953511    4070 main.go:141] libmachine: Decoding PEM data...
	I0906 16:55:10.953530    4070 main.go:141] libmachine: Parsing certificate...
	I0906 16:55:10.954058    4070 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:55:11.076514    4070 main.go:141] libmachine: Creating SSH key...
	I0906 16:55:11.148389    4070 main.go:141] libmachine: Creating Disk image...
	I0906 16:55:11.148395    4070 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:55:11.148540    4070 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/calico-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/calico-967000/disk.qcow2
	I0906 16:55:11.157161    4070 main.go:141] libmachine: STDOUT: 
	I0906 16:55:11.157174    4070 main.go:141] libmachine: STDERR: 
	I0906 16:55:11.157230    4070 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/calico-967000/disk.qcow2 +20000M
	I0906 16:55:11.164314    4070 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:55:11.164327    4070 main.go:141] libmachine: STDERR: 
	I0906 16:55:11.164338    4070 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/calico-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/calico-967000/disk.qcow2
	I0906 16:55:11.164344    4070 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:55:11.164384    4070 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/calico-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/calico-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/calico-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:5e:dc:a3:20:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/calico-967000/disk.qcow2
	I0906 16:55:11.165748    4070 main.go:141] libmachine: STDOUT: 
	I0906 16:55:11.165760    4070 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:55:11.165771    4070 client.go:171] LocalClient.Create took 212.562875ms
	I0906 16:55:13.166918    4070 start.go:128] duration metric: createHost completed in 2.272394875s
	I0906 16:55:13.166970    4070 start.go:83] releasing machines lock for "calico-967000", held for 2.272957917s
	W0906 16:55:13.167334    4070 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:55:13.177799    4070 out.go:177] 
	W0906 16:55:13.181900    4070 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 16:55:13.181922    4070 out.go:239] * 
	W0906 16:55:13.184407    4070 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 16:55:13.193831    4070 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (10.69s)

TestStoppedBinaryUpgrade/Upgrade (1.48s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.323422622.exe start -p stopped-upgrade-646000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.323422622.exe start -p stopped-upgrade-646000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.323422622.exe: permission denied (8.552875ms)
version_upgrade_test.go:195: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.323422622.exe start -p stopped-upgrade-646000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.323422622.exe start -p stopped-upgrade-646000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.323422622.exe: permission denied (7.374375ms)
version_upgrade_test.go:195: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.323422622.exe start -p stopped-upgrade-646000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.323422622.exe start -p stopped-upgrade-646000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.323422622.exe: permission denied (8.037375ms)
version_upgrade_test.go:201: legacy v1.6.2 start failed: fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.323422622.exe: permission denied
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (1.48s)
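Unlike the socket_vmnet failures, this one never reaches the VM layer: `fork/exec …: permission denied` is the error exec(2) reports when the file exists but lacks the execute bit, which suggests the downloaded legacy v1.6.2 binary was never made executable. A small sketch that reproduces and clears the same failure with a throwaway script (the temp path in the log is transient, so a `mktemp` stand-in is used):

```shell
# mktemp creates the file with mode 0600, so executing it fails with
# "permission denied", just like the legacy minikube binary above.
BIN=$(mktemp)
printf '#!/bin/sh\necho ok\n' > "$BIN"

"$BIN" 2>/dev/null && echo "unexpectedly ran"   # no +x: exec is refused

chmod +x "$BIN"   # grant the execute bit, as the test harness would need to
OUT=$("$BIN")
echo "after chmod +x: $OUT"
rm -f "$BIN"
```

Note the test retried three times in ~8 ms each; a permission error is immediate and deterministic, so retrying without a `chmod +x` between attempts cannot succeed.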

TestStoppedBinaryUpgrade/MinikubeLogs (0.11s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-646000
version_upgrade_test.go:218: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p stopped-upgrade-646000: exit status 85 (113.292875ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kubenet-967000 sudo                               | kubenet-967000        | jenkins | v1.31.2 | 06 Sep 23 16:54 PDT |                     |
	|         | journalctl -xeu kubelet --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p kubenet-967000 sudo cat                           | kubenet-967000        | jenkins | v1.31.2 | 06 Sep 23 16:54 PDT |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p kubenet-967000 sudo cat                           | kubenet-967000        | jenkins | v1.31.2 | 06 Sep 23 16:54 PDT |                     |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p kubenet-967000 sudo                               | kubenet-967000        | jenkins | v1.31.2 | 06 Sep 23 16:54 PDT |                     |
	|         | systemctl status docker --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p kubenet-967000 sudo                               | kubenet-967000        | jenkins | v1.31.2 | 06 Sep 23 16:54 PDT |                     |
	|         | systemctl cat docker                                 |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p kubenet-967000 sudo cat                           | kubenet-967000        | jenkins | v1.31.2 | 06 Sep 23 16:54 PDT |                     |
	|         | /etc/docker/daemon.json                              |                       |         |         |                     |                     |
	| ssh     | -p kubenet-967000 sudo docker                        | kubenet-967000        | jenkins | v1.31.2 | 06 Sep 23 16:54 PDT |                     |
	|         | system info                                          |                       |         |         |                     |                     |
	| ssh     | -p kubenet-967000 sudo                               | kubenet-967000        | jenkins | v1.31.2 | 06 Sep 23 16:54 PDT |                     |
	|         | systemctl status cri-docker                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p kubenet-967000 sudo                               | kubenet-967000        | jenkins | v1.31.2 | 06 Sep 23 16:54 PDT |                     |
	|         | systemctl cat cri-docker                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p kubenet-967000 sudo cat                           | kubenet-967000        | jenkins | v1.31.2 | 06 Sep 23 16:54 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p kubenet-967000 sudo cat                           | kubenet-967000        | jenkins | v1.31.2 | 06 Sep 23 16:54 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p kubenet-967000 sudo                               | kubenet-967000        | jenkins | v1.31.2 | 06 Sep 23 16:54 PDT |                     |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p kubenet-967000 sudo                               | kubenet-967000        | jenkins | v1.31.2 | 06 Sep 23 16:54 PDT |                     |
	|         | systemctl status containerd                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p kubenet-967000 sudo                               | kubenet-967000        | jenkins | v1.31.2 | 06 Sep 23 16:54 PDT |                     |
	|         | systemctl cat containerd                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p kubenet-967000 sudo cat                           | kubenet-967000        | jenkins | v1.31.2 | 06 Sep 23 16:54 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p kubenet-967000 sudo cat                           | kubenet-967000        | jenkins | v1.31.2 | 06 Sep 23 16:54 PDT |                     |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p kubenet-967000 sudo                               | kubenet-967000        | jenkins | v1.31.2 | 06 Sep 23 16:54 PDT |                     |
	|         | containerd config dump                               |                       |         |         |                     |                     |
	| ssh     | -p kubenet-967000 sudo                               | kubenet-967000        | jenkins | v1.31.2 | 06 Sep 23 16:54 PDT |                     |
	|         | systemctl status crio --all                          |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p kubenet-967000 sudo                               | kubenet-967000        | jenkins | v1.31.2 | 06 Sep 23 16:54 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                       |         |         |                     |                     |
	| ssh     | -p kubenet-967000 sudo find                          | kubenet-967000        | jenkins | v1.31.2 | 06 Sep 23 16:54 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p kubenet-967000 sudo crio                          | kubenet-967000        | jenkins | v1.31.2 | 06 Sep 23 16:54 PDT |                     |
	|         | config                                               |                       |         |         |                     |                     |
	| delete  | -p kubenet-967000                                    | kubenet-967000        | jenkins | v1.31.2 | 06 Sep 23 16:54 PDT | 06 Sep 23 16:54 PDT |
	| start   | -p custom-flannel-967000                             | custom-flannel-967000 | jenkins | v1.31.2 | 06 Sep 23 16:54 PDT |                     |
	|         | --memory=3072 --alsologtostderr                      |                       |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                       |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                       |         |         |                     |                     |
	|         | --driver=qemu2                                       |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-967000 sudo                        | custom-flannel-967000 | jenkins | v1.31.2 | 06 Sep 23 16:55 PDT |                     |
	|         | cat /etc/nsswitch.conf                               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-967000 sudo                        | custom-flannel-967000 | jenkins | v1.31.2 | 06 Sep 23 16:55 PDT |                     |
	|         | cat /etc/hosts                                       |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-967000 sudo                        | custom-flannel-967000 | jenkins | v1.31.2 | 06 Sep 23 16:55 PDT |                     |
	|         | cat /etc/resolv.conf                                 |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-967000 sudo                        | custom-flannel-967000 | jenkins | v1.31.2 | 06 Sep 23 16:55 PDT |                     |
	|         | crictl pods                                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-967000 sudo                        | custom-flannel-967000 | jenkins | v1.31.2 | 06 Sep 23 16:55 PDT |                     |
	|         | crictl ps --all                                      |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-967000 sudo                        | custom-flannel-967000 | jenkins | v1.31.2 | 06 Sep 23 16:55 PDT |                     |
	|         | find /etc/cni -type f -exec sh                       |                       |         |         |                     |                     |
	|         | -c 'echo {}; cat {}' \;                              |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-967000 sudo                        | custom-flannel-967000 | jenkins | v1.31.2 | 06 Sep 23 16:55 PDT |                     |
	|         | ip a s                                               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-967000 sudo                        | custom-flannel-967000 | jenkins | v1.31.2 | 06 Sep 23 16:55 PDT |                     |
	|         | ip r s                                               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-967000 sudo                        | custom-flannel-967000 | jenkins | v1.31.2 | 06 Sep 23 16:55 PDT |                     |
	|         | iptables-save                                        |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-967000 sudo                        | custom-flannel-967000 | jenkins | v1.31.2 | 06 Sep 23 16:55 PDT |                     |
	|         | iptables -t nat -L -n -v                             |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-967000 sudo                        | custom-flannel-967000 | jenkins | v1.31.2 | 06 Sep 23 16:55 PDT |                     |
	|         | cat /run/flannel/subnet.env                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-967000                             | custom-flannel-967000 | jenkins | v1.31.2 | 06 Sep 23 16:55 PDT |                     |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/kube-flannel/cni-conf.json                      |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-967000 sudo                        | custom-flannel-967000 | jenkins | v1.31.2 | 06 Sep 23 16:55 PDT |                     |
	|         | systemctl status kubelet --all                       |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-967000                             | custom-flannel-967000 | jenkins | v1.31.2 | 06 Sep 23 16:55 PDT |                     |
	|         | sudo systemctl cat kubelet                           |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-967000 sudo                        | custom-flannel-967000 | jenkins | v1.31.2 | 06 Sep 23 16:55 PDT |                     |
	|         | journalctl -xeu kubelet --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-967000                             | custom-flannel-967000 | jenkins | v1.31.2 | 06 Sep 23 16:55 PDT |                     |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-967000                             | custom-flannel-967000 | jenkins | v1.31.2 | 06 Sep 23 16:55 PDT |                     |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-967000 sudo                        | custom-flannel-967000 | jenkins | v1.31.2 | 06 Sep 23 16:55 PDT |                     |
	|         | systemctl status docker --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-967000                             | custom-flannel-967000 | jenkins | v1.31.2 | 06 Sep 23 16:55 PDT |                     |
	|         | sudo systemctl cat docker                            |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-967000 sudo                        | custom-flannel-967000 | jenkins | v1.31.2 | 06 Sep 23 16:55 PDT |                     |
	|         | cat /etc/docker/daemon.json                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-967000 sudo                        | custom-flannel-967000 | jenkins | v1.31.2 | 06 Sep 23 16:55 PDT |                     |
	|         | docker system info                                   |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-967000 sudo                        | custom-flannel-967000 | jenkins | v1.31.2 | 06 Sep 23 16:55 PDT |                     |
	|         | systemctl status cri-docker                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-967000                             | custom-flannel-967000 | jenkins | v1.31.2 | 06 Sep 23 16:55 PDT |                     |
	|         | sudo systemctl cat cri-docker                        |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-967000 sudo cat                    | custom-flannel-967000 | jenkins | v1.31.2 | 06 Sep 23 16:55 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-967000 sudo cat                    | custom-flannel-967000 | jenkins | v1.31.2 | 06 Sep 23 16:55 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-967000 sudo                        | custom-flannel-967000 | jenkins | v1.31.2 | 06 Sep 23 16:55 PDT |                     |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-967000 sudo                        | custom-flannel-967000 | jenkins | v1.31.2 | 06 Sep 23 16:55 PDT |                     |
	|         | systemctl status containerd                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-967000                             | custom-flannel-967000 | jenkins | v1.31.2 | 06 Sep 23 16:55 PDT |                     |
	|         | sudo systemctl cat containerd                        |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-967000 sudo cat                    | custom-flannel-967000 | jenkins | v1.31.2 | 06 Sep 23 16:55 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-967000                             | custom-flannel-967000 | jenkins | v1.31.2 | 06 Sep 23 16:55 PDT |                     |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-967000 sudo                        | custom-flannel-967000 | jenkins | v1.31.2 | 06 Sep 23 16:55 PDT |                     |
	|         | containerd config dump                               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-967000 sudo                        | custom-flannel-967000 | jenkins | v1.31.2 | 06 Sep 23 16:55 PDT |                     |
	|         | systemctl status crio --all                          |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-967000 sudo                        | custom-flannel-967000 | jenkins | v1.31.2 | 06 Sep 23 16:55 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-967000 sudo                        | custom-flannel-967000 | jenkins | v1.31.2 | 06 Sep 23 16:55 PDT |                     |
	|         | find /etc/crio -type f -exec                         |                       |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-967000 sudo                        | custom-flannel-967000 | jenkins | v1.31.2 | 06 Sep 23 16:55 PDT |                     |
	|         | crio config                                          |                       |         |         |                     |                     |
	| delete  | -p custom-flannel-967000                             | custom-flannel-967000 | jenkins | v1.31.2 | 06 Sep 23 16:55 PDT | 06 Sep 23 16:55 PDT |
	| start   | -p calico-967000 --memory=3072                       | calico-967000         | jenkins | v1.31.2 | 06 Sep 23 16:55 PDT |                     |
	|         | --alsologtostderr --wait=true                        |                       |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                       |         |         |                     |                     |
	|         | --cni=calico --driver=qemu2                          |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/06 16:55:02
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 16:55:02.548104    4070 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:55:02.548208    4070 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:55:02.548211    4070 out.go:309] Setting ErrFile to fd 2...
	I0906 16:55:02.548213    4070 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:55:02.548322    4070 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:55:02.549348    4070 out.go:303] Setting JSON to false
	I0906 16:55:02.564343    4070 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1476,"bootTime":1694043026,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0906 16:55:02.564427    4070 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 16:55:02.572192    4070 out.go:177] * [calico-967000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 16:55:02.576209    4070 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 16:55:02.576256    4070 notify.go:220] Checking for updates...
	I0906 16:55:02.580192    4070 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:55:02.583156    4070 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 16:55:02.586233    4070 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 16:55:02.590140    4070 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	I0906 16:55:02.593224    4070 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 16:55:02.596517    4070 config.go:182] Loaded profile config "multinode-994000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:55:02.596565    4070 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 16:55:02.600111    4070 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 16:55:02.607168    4070 start.go:298] selected driver: qemu2
	I0906 16:55:02.607176    4070 start.go:902] validating driver "qemu2" against <nil>
	I0906 16:55:02.607183    4070 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 16:55:02.609157    4070 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 16:55:02.612099    4070 out.go:177] * Automatically selected the socket_vmnet network
	I0906 16:55:02.616267    4070 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 16:55:02.616296    4070 cni.go:84] Creating CNI manager for "calico"
	I0906 16:55:02.616309    4070 start_flags.go:316] Found "Calico" CNI - setting NetworkPlugin=cni
	I0906 16:55:02.616315    4070 start_flags.go:321] config:
	{Name:calico-967000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:calico-967000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:55:02.620455    4070 iso.go:125] acquiring lock: {Name:mk208580a41bd83b6238142fbb7ad0497ef8967b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:55:02.629125    4070 out.go:177] * Starting control plane node calico-967000 in cluster calico-967000
	I0906 16:55:02.633144    4070 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 16:55:02.633169    4070 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 16:55:02.633191    4070 cache.go:57] Caching tarball of preloaded images
	I0906 16:55:02.633255    4070 preload.go:174] Found /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 16:55:02.633261    4070 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 16:55:02.633324    4070 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/calico-967000/config.json ...
	I0906 16:55:02.633336    4070 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/calico-967000/config.json: {Name:mkf0600e6e048a3fa76ee871544ef7ebb7c239b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:55:02.633537    4070 start.go:365] acquiring machines lock for calico-967000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:55:02.633566    4070 start.go:369] acquired machines lock for "calico-967000" in 23.25µs
	I0906 16:55:02.633579    4070 start.go:93] Provisioning new machine with config: &{Name:calico-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:calico-967000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:55:02.633611    4070 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:55:02.642182    4070 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 16:55:02.658428    4070 start.go:159] libmachine.API.Create for "calico-967000" (driver="qemu2")
	I0906 16:55:02.658462    4070 client.go:168] LocalClient.Create starting
	I0906 16:55:02.658533    4070 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:55:02.658561    4070 main.go:141] libmachine: Decoding PEM data...
	I0906 16:55:02.658570    4070 main.go:141] libmachine: Parsing certificate...
	I0906 16:55:02.658618    4070 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:55:02.658637    4070 main.go:141] libmachine: Decoding PEM data...
	I0906 16:55:02.658648    4070 main.go:141] libmachine: Parsing certificate...
	I0906 16:55:02.659254    4070 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:55:02.773124    4070 main.go:141] libmachine: Creating SSH key...
	I0906 16:55:02.845211    4070 main.go:141] libmachine: Creating Disk image...
	I0906 16:55:02.845216    4070 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:55:02.845350    4070 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/calico-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/calico-967000/disk.qcow2
	I0906 16:55:02.853753    4070 main.go:141] libmachine: STDOUT: 
	I0906 16:55:02.853768    4070 main.go:141] libmachine: STDERR: 
	I0906 16:55:02.853816    4070 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/calico-967000/disk.qcow2 +20000M
	I0906 16:55:02.860924    4070 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:55:02.860938    4070 main.go:141] libmachine: STDERR: 
	I0906 16:55:02.860953    4070 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/calico-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/calico-967000/disk.qcow2
	I0906 16:55:02.860959    4070 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:55:02.860993    4070 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/calico-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/calico-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/calico-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:c2:6c:4a:7e:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/calico-967000/disk.qcow2
	I0906 16:55:02.862451    4070 main.go:141] libmachine: STDOUT: 
	I0906 16:55:02.862467    4070 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:55:02.862486    4070 client.go:171] LocalClient.Create took 204.026167ms
	I0906 16:55:04.864583    4070 start.go:128] duration metric: createHost completed in 2.231046917s
	I0906 16:55:04.864645    4070 start.go:83] releasing machines lock for "calico-967000", held for 2.231161416s
	W0906 16:55:04.864733    4070 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:55:04.876229    4070 out.go:177] * Deleting "calico-967000" in qemu2 ...
	W0906 16:55:04.895880    4070 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:55:04.895902    4070 start.go:687] Will try again in 5 seconds ...
	
	* 
	* Profile "stopped-upgrade-646000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p stopped-upgrade-646000"

-- /stdout --
version_upgrade_test.go:220: `minikube logs` after upgrade to HEAD from v1.6.2 failed: exit status 85
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (0.11s)

TestNetworkPlugins/group/false/Start (11.69s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-967000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-967000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (11.686243625s)

-- stdout --
	* [false-967000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node false-967000 in cluster false-967000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-967000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 16:55:08.593852    4099 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:55:08.593973    4099 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:55:08.593975    4099 out.go:309] Setting ErrFile to fd 2...
	I0906 16:55:08.593978    4099 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:55:08.594105    4099 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:55:08.595120    4099 out.go:303] Setting JSON to false
	I0906 16:55:08.610205    4099 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1482,"bootTime":1694043026,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0906 16:55:08.610282    4099 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 16:55:08.616539    4099 out.go:177] * [false-967000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 16:55:08.619542    4099 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 16:55:08.619594    4099 notify.go:220] Checking for updates...
	I0906 16:55:08.624533    4099 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:55:08.628464    4099 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 16:55:08.632469    4099 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 16:55:08.633855    4099 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	I0906 16:55:08.637487    4099 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 16:55:08.640808    4099 config.go:182] Loaded profile config "calico-967000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:55:08.640872    4099 config.go:182] Loaded profile config "multinode-994000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:55:08.640915    4099 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 16:55:08.645311    4099 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 16:55:08.652473    4099 start.go:298] selected driver: qemu2
	I0906 16:55:08.652483    4099 start.go:902] validating driver "qemu2" against <nil>
	I0906 16:55:08.652491    4099 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 16:55:08.654526    4099 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 16:55:08.658262    4099 out.go:177] * Automatically selected the socket_vmnet network
	I0906 16:55:08.661522    4099 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 16:55:08.661545    4099 cni.go:84] Creating CNI manager for "false"
	I0906 16:55:08.661550    4099 start_flags.go:321] config:
	{Name:false-967000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:false-967000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRIS
ocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPI
D:0 AutoPauseInterval:1m0s}
	I0906 16:55:08.665865    4099 iso.go:125] acquiring lock: {Name:mk208580a41bd83b6238142fbb7ad0497ef8967b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:55:08.669535    4099 out.go:177] * Starting control plane node false-967000 in cluster false-967000
	I0906 16:55:08.677522    4099 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 16:55:08.677552    4099 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 16:55:08.677571    4099 cache.go:57] Caching tarball of preloaded images
	I0906 16:55:08.677652    4099 preload.go:174] Found /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 16:55:08.677658    4099 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 16:55:08.677727    4099 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/false-967000/config.json ...
	I0906 16:55:08.677740    4099 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/false-967000/config.json: {Name:mk174db71cbb142166fb0f83466b87c61cf9fde1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:55:08.677968    4099 start.go:365] acquiring machines lock for false-967000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:55:08.677998    4099 start.go:369] acquired machines lock for "false-967000" in 24.458µs
	I0906 16:55:08.678011    4099 start.go:93] Provisioning new machine with config: &{Name:false-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.1 ClusterName:false-967000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:55:08.678041    4099 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:55:08.686488    4099 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 16:55:08.702465    4099 start.go:159] libmachine.API.Create for "false-967000" (driver="qemu2")
	I0906 16:55:08.702505    4099 client.go:168] LocalClient.Create starting
	I0906 16:55:08.702568    4099 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:55:08.702590    4099 main.go:141] libmachine: Decoding PEM data...
	I0906 16:55:08.702600    4099 main.go:141] libmachine: Parsing certificate...
	I0906 16:55:08.702636    4099 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:55:08.702657    4099 main.go:141] libmachine: Decoding PEM data...
	I0906 16:55:08.702667    4099 main.go:141] libmachine: Parsing certificate...
	I0906 16:55:08.702971    4099 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:55:08.815266    4099 main.go:141] libmachine: Creating SSH key...
	I0906 16:55:08.874046    4099 main.go:141] libmachine: Creating Disk image...
	I0906 16:55:08.874051    4099 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:55:08.874179    4099 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/false-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/false-967000/disk.qcow2
	I0906 16:55:08.882826    4099 main.go:141] libmachine: STDOUT: 
	I0906 16:55:08.882839    4099 main.go:141] libmachine: STDERR: 
	I0906 16:55:08.882891    4099 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/false-967000/disk.qcow2 +20000M
	I0906 16:55:08.889920    4099 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:55:08.889932    4099 main.go:141] libmachine: STDERR: 
	I0906 16:55:08.889948    4099 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/false-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/false-967000/disk.qcow2
	I0906 16:55:08.889957    4099 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:55:08.889991    4099 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/false-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/false-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/false-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:bd:98:40:81:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/false-967000/disk.qcow2
	I0906 16:55:08.891424    4099 main.go:141] libmachine: STDOUT: 
	I0906 16:55:08.891436    4099 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:55:08.891454    4099 client.go:171] LocalClient.Create took 188.950334ms
	I0906 16:55:10.893823    4099 start.go:128] duration metric: createHost completed in 2.215842791s
	I0906 16:55:10.893891    4099 start.go:83] releasing machines lock for "false-967000", held for 2.215974334s
	W0906 16:55:10.893978    4099 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:55:10.914223    4099 out.go:177] * Deleting "false-967000" in qemu2 ...
	W0906 16:55:10.930421    4099 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:55:10.930449    4099 start.go:687] Will try again in 5 seconds ...
	I0906 16:55:15.932372    4099 start.go:365] acquiring machines lock for false-967000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:55:17.865329    4099 start.go:369] acquired machines lock for "false-967000" in 1.932984167s
	I0906 16:55:17.865500    4099 start.go:93] Provisioning new machine with config: &{Name:false-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.1 ClusterName:false-967000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:55:17.865774    4099 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:55:17.874292    4099 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0906 16:55:17.918985    4099 start.go:159] libmachine.API.Create for "false-967000" (driver="qemu2")
	I0906 16:55:17.919019    4099 client.go:168] LocalClient.Create starting
	I0906 16:55:17.919165    4099 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:55:17.919220    4099 main.go:141] libmachine: Decoding PEM data...
	I0906 16:55:17.919237    4099 main.go:141] libmachine: Parsing certificate...
	I0906 16:55:17.919313    4099 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:55:17.919363    4099 main.go:141] libmachine: Decoding PEM data...
	I0906 16:55:17.919375    4099 main.go:141] libmachine: Parsing certificate...
	I0906 16:55:17.919902    4099 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:55:18.050085    4099 main.go:141] libmachine: Creating SSH key...
	I0906 16:55:18.190723    4099 main.go:141] libmachine: Creating Disk image...
	I0906 16:55:18.190730    4099 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:55:18.190884    4099 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/false-967000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/false-967000/disk.qcow2
	I0906 16:55:18.199658    4099 main.go:141] libmachine: STDOUT: 
	I0906 16:55:18.199670    4099 main.go:141] libmachine: STDERR: 
	I0906 16:55:18.199744    4099 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/false-967000/disk.qcow2 +20000M
	I0906 16:55:18.206881    4099 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:55:18.206907    4099 main.go:141] libmachine: STDERR: 
	I0906 16:55:18.206919    4099 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/false-967000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/false-967000/disk.qcow2
	I0906 16:55:18.206932    4099 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:55:18.206969    4099 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/false-967000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/false-967000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/false-967000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:7f:63:2f:3d:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/false-967000/disk.qcow2
	I0906 16:55:18.208548    4099 main.go:141] libmachine: STDOUT: 
	I0906 16:55:18.208563    4099 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:55:18.208576    4099 client.go:171] LocalClient.Create took 289.550708ms
	I0906 16:55:20.210742    4099 start.go:128] duration metric: createHost completed in 2.345023042s
	I0906 16:55:20.210858    4099 start.go:83] releasing machines lock for "false-967000", held for 2.345582667s
	W0906 16:55:20.211317    4099 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:55:20.221216    4099 out.go:177] 
	W0906 16:55:20.226219    4099 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 16:55:20.226248    4099 out.go:239] * 
	* 
	W0906 16:55:20.228638    4099 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 16:55:20.238994    4099 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (11.69s)
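Every failure in this group reduces to the same root cause: `socket_vmnet_client` cannot connect to `/var/run/socket_vmnet` when launching `qemu-system-aarch64`. Note that "Connection refused" on a unix socket means the socket file exists but nothing is accepting on it (typically a stale socket left by a dead socket_vmnet daemon); a missing file would instead surface as "No such file or directory". A minimal diagnostic sketch — `socket_status` is a hypothetical helper for illustration, not a minikube or socket_vmnet command:

```shell
#!/bin/sh
# Hypothetical diagnostic helper; the path below matches the one in the log above.
socket_status() {
  if [ -S "$1" ]; then
    # File exists: "Connection refused" then means no process is accepting
    # on it, i.e. the socket_vmnet daemon is dead and left a stale socket.
    echo "present"
  else
    # Missing file would instead produce "No such file or directory".
    echo "missing"
  fi
}

socket_status /var/run/socket_vmnet
```

If the socket is stale or missing, restarting the socket_vmnet daemon (however it is managed on the agent, e.g. via launchd on the macOS runners) should clear all of these `GUEST_PROVISION` failures at once.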

TestStartStop/group/old-k8s-version/serial/FirstStart (11.75s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-782000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-782000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (11.682610417s)

-- stdout --
	* [old-k8s-version-782000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node old-k8s-version-782000 in cluster old-k8s-version-782000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-782000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 16:55:15.539661    4227 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:55:15.539761    4227 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:55:15.539764    4227 out.go:309] Setting ErrFile to fd 2...
	I0906 16:55:15.539766    4227 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:55:15.539871    4227 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:55:15.540875    4227 out.go:303] Setting JSON to false
	I0906 16:55:15.556140    4227 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1489,"bootTime":1694043026,"procs":413,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0906 16:55:15.556207    4227 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 16:55:15.561015    4227 out.go:177] * [old-k8s-version-782000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 16:55:15.568970    4227 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 16:55:15.572942    4227 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:55:15.569043    4227 notify.go:220] Checking for updates...
	I0906 16:55:15.578871    4227 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 16:55:15.581965    4227 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 16:55:15.585004    4227 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	I0906 16:55:15.587940    4227 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 16:55:15.591242    4227 config.go:182] Loaded profile config "false-967000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:55:15.591313    4227 config.go:182] Loaded profile config "multinode-994000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:55:15.591356    4227 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 16:55:15.595986    4227 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 16:55:15.602961    4227 start.go:298] selected driver: qemu2
	I0906 16:55:15.602967    4227 start.go:902] validating driver "qemu2" against <nil>
	I0906 16:55:15.602975    4227 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 16:55:15.604900    4227 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 16:55:15.608003    4227 out.go:177] * Automatically selected the socket_vmnet network
	I0906 16:55:15.609524    4227 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 16:55:15.609551    4227 cni.go:84] Creating CNI manager for ""
	I0906 16:55:15.609560    4227 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0906 16:55:15.609564    4227 start_flags.go:321] config:
	{Name:old-k8s-version-782000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-782000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:55:15.613523    4227 iso.go:125] acquiring lock: {Name:mk208580a41bd83b6238142fbb7ad0497ef8967b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:55:15.621037    4227 out.go:177] * Starting control plane node old-k8s-version-782000 in cluster old-k8s-version-782000
	I0906 16:55:15.624832    4227 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0906 16:55:15.624852    4227 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0906 16:55:15.624864    4227 cache.go:57] Caching tarball of preloaded images
	I0906 16:55:15.624930    4227 preload.go:174] Found /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 16:55:15.624936    4227 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0906 16:55:15.625001    4227 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/old-k8s-version-782000/config.json ...
	I0906 16:55:15.625014    4227 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/old-k8s-version-782000/config.json: {Name:mk04c478ff9e82715e32a72d64216939dc5eae96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:55:15.625224    4227 start.go:365] acquiring machines lock for old-k8s-version-782000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:55:15.625254    4227 start.go:369] acquired machines lock for "old-k8s-version-782000" in 23.959µs
	I0906 16:55:15.625268    4227 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-782000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-782000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:55:15.625299    4227 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:55:15.633971    4227 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 16:55:15.649483    4227 start.go:159] libmachine.API.Create for "old-k8s-version-782000" (driver="qemu2")
	I0906 16:55:15.649509    4227 client.go:168] LocalClient.Create starting
	I0906 16:55:15.649581    4227 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:55:15.649611    4227 main.go:141] libmachine: Decoding PEM data...
	I0906 16:55:15.649626    4227 main.go:141] libmachine: Parsing certificate...
	I0906 16:55:15.649670    4227 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:55:15.649689    4227 main.go:141] libmachine: Decoding PEM data...
	I0906 16:55:15.649697    4227 main.go:141] libmachine: Parsing certificate...
	I0906 16:55:15.650031    4227 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:55:15.764306    4227 main.go:141] libmachine: Creating SSH key...
	I0906 16:55:15.845794    4227 main.go:141] libmachine: Creating Disk image...
	I0906 16:55:15.845800    4227 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:55:15.845928    4227 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/old-k8s-version-782000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/old-k8s-version-782000/disk.qcow2
	I0906 16:55:15.854337    4227 main.go:141] libmachine: STDOUT: 
	I0906 16:55:15.854350    4227 main.go:141] libmachine: STDERR: 
	I0906 16:55:15.854403    4227 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/old-k8s-version-782000/disk.qcow2 +20000M
	I0906 16:55:15.861444    4227 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:55:15.861461    4227 main.go:141] libmachine: STDERR: 
	I0906 16:55:15.861474    4227 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/old-k8s-version-782000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/old-k8s-version-782000/disk.qcow2
	I0906 16:55:15.861483    4227 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:55:15.861516    4227 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/old-k8s-version-782000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/old-k8s-version-782000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/old-k8s-version-782000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:0a:fe:92:88:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/old-k8s-version-782000/disk.qcow2
	I0906 16:55:15.863005    4227 main.go:141] libmachine: STDOUT: 
	I0906 16:55:15.863016    4227 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:55:15.863034    4227 client.go:171] LocalClient.Create took 213.527959ms
	I0906 16:55:17.865107    4227 start.go:128] duration metric: createHost completed in 2.239878042s
	I0906 16:55:17.865182    4227 start.go:83] releasing machines lock for "old-k8s-version-782000", held for 2.240009625s
	W0906 16:55:17.865253    4227 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:55:17.883109    4227 out.go:177] * Deleting "old-k8s-version-782000" in qemu2 ...
	W0906 16:55:17.898272    4227 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:55:17.898294    4227 start.go:687] Will try again in 5 seconds ...
	I0906 16:55:22.898741    4227 start.go:365] acquiring machines lock for old-k8s-version-782000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:55:24.786808    4227 start.go:369] acquired machines lock for "old-k8s-version-782000" in 1.888102917s
	I0906 16:55:24.786956    4227 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-782000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-782000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:55:24.787246    4227 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:55:24.793000    4227 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 16:55:24.841570    4227 start.go:159] libmachine.API.Create for "old-k8s-version-782000" (driver="qemu2")
	I0906 16:55:24.841625    4227 client.go:168] LocalClient.Create starting
	I0906 16:55:24.841766    4227 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:55:24.841843    4227 main.go:141] libmachine: Decoding PEM data...
	I0906 16:55:24.841863    4227 main.go:141] libmachine: Parsing certificate...
	I0906 16:55:24.841942    4227 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:55:24.841990    4227 main.go:141] libmachine: Decoding PEM data...
	I0906 16:55:24.842013    4227 main.go:141] libmachine: Parsing certificate...
	I0906 16:55:24.842569    4227 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:55:24.968113    4227 main.go:141] libmachine: Creating SSH key...
	I0906 16:55:25.132569    4227 main.go:141] libmachine: Creating Disk image...
	I0906 16:55:25.132576    4227 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:55:25.132717    4227 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/old-k8s-version-782000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/old-k8s-version-782000/disk.qcow2
	I0906 16:55:25.141241    4227 main.go:141] libmachine: STDOUT: 
	I0906 16:55:25.141257    4227 main.go:141] libmachine: STDERR: 
	I0906 16:55:25.141310    4227 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/old-k8s-version-782000/disk.qcow2 +20000M
	I0906 16:55:25.148765    4227 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:55:25.148779    4227 main.go:141] libmachine: STDERR: 
	I0906 16:55:25.148797    4227 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/old-k8s-version-782000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/old-k8s-version-782000/disk.qcow2
	I0906 16:55:25.148806    4227 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:55:25.148851    4227 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/old-k8s-version-782000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/old-k8s-version-782000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/old-k8s-version-782000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:03:37:b6:db:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/old-k8s-version-782000/disk.qcow2
	I0906 16:55:25.150403    4227 main.go:141] libmachine: STDOUT: 
	I0906 16:55:25.150415    4227 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:55:25.150427    4227 client.go:171] LocalClient.Create took 308.807542ms
	I0906 16:55:27.152617    4227 start.go:128] duration metric: createHost completed in 2.365398208s
	I0906 16:55:27.152691    4227 start.go:83] releasing machines lock for "old-k8s-version-782000", held for 2.365944s
	W0906 16:55:27.153043    4227 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-782000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-782000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:55:27.164238    4227 out.go:177] 
	W0906 16:55:27.169172    4227 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 16:55:27.169201    4227 out.go:239] * 
	* 
	W0906 16:55:27.172596    4227 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 16:55:27.181141    4227 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-782000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000: exit status 7 (67.664625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-782000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (11.75s)
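As the log's own hint says, the VM was never created, so the standard recovery is to restart the daemon and recreate the profile. A dry-run sketch of those steps (commands are echoed, not executed; the `brew services` step is an assumption that socket_vmnet is Homebrew-managed, and the profile name is taken from the log above):

```shell
# Recovery steps for a "Connection refused" on /var/run/socket_vmnet,
# printed as a dry run so nothing privileged actually runs here.
PROFILE=old-k8s-version-782000
for cmd in \
  "sudo brew services restart socket_vmnet" \
  "out/minikube-darwin-arm64 delete -p $PROFILE" \
  "out/minikube-darwin-arm64 start -p $PROFILE --driver=qemu2 --kubernetes-version=v1.16.0"
do
  # Each step is echoed; run them manually once the daemon path is confirmed.
  echo "$cmd"
done
```

The same sequence applies to every profile in this run that fails with the socket_vmnet connection error, substituting the profile name.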

TestStartStop/group/no-preload/serial/FirstStart (9.88s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-147000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-147000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (9.813387417s)

-- stdout --
	* [no-preload-147000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node no-preload-147000 in cluster no-preload-147000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-147000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 16:55:22.368159    4337 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:55:22.368298    4337 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:55:22.368302    4337 out.go:309] Setting ErrFile to fd 2...
	I0906 16:55:22.368304    4337 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:55:22.368416    4337 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:55:22.369400    4337 out.go:303] Setting JSON to false
	I0906 16:55:22.384360    4337 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1496,"bootTime":1694043026,"procs":413,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0906 16:55:22.384427    4337 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 16:55:22.389187    4337 out.go:177] * [no-preload-147000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 16:55:22.396136    4337 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 16:55:22.400183    4337 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:55:22.396189    4337 notify.go:220] Checking for updates...
	I0906 16:55:22.406144    4337 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 16:55:22.409498    4337 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 16:55:22.412167    4337 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	I0906 16:55:22.413246    4337 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 16:55:22.416441    4337 config.go:182] Loaded profile config "multinode-994000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:55:22.416508    4337 config.go:182] Loaded profile config "old-k8s-version-782000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0906 16:55:22.416547    4337 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 16:55:22.420230    4337 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 16:55:22.425174    4337 start.go:298] selected driver: qemu2
	I0906 16:55:22.425178    4337 start.go:902] validating driver "qemu2" against <nil>
	I0906 16:55:22.425185    4337 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 16:55:22.427354    4337 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 16:55:22.430144    4337 out.go:177] * Automatically selected the socket_vmnet network
	I0906 16:55:22.433368    4337 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 16:55:22.433391    4337 cni.go:84] Creating CNI manager for ""
	I0906 16:55:22.433398    4337 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 16:55:22.433402    4337 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 16:55:22.433408    4337 start_flags.go:321] config:
	{Name:no-preload-147000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-147000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:55:22.437776    4337 iso.go:125] acquiring lock: {Name:mk208580a41bd83b6238142fbb7ad0497ef8967b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:55:22.445179    4337 out.go:177] * Starting control plane node no-preload-147000 in cluster no-preload-147000
	I0906 16:55:22.449175    4337 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 16:55:22.449263    4337 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/no-preload-147000/config.json ...
	I0906 16:55:22.449278    4337 cache.go:107] acquiring lock: {Name:mkcea8adbcf4473108cd77501b079c01923dd55c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:55:22.449293    4337 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/no-preload-147000/config.json: {Name:mk8cb3308f9ac9601d76b3c64f1f268c7977079c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:55:22.449302    4337 cache.go:107] acquiring lock: {Name:mk988efe56944e63e2eda24e0bbac05842f50ad1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:55:22.449297    4337 cache.go:107] acquiring lock: {Name:mke1670fee36de6e57bcaea343db0a8f840100ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:55:22.449456    4337 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0906 16:55:22.449471    4337 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0906 16:55:22.449487    4337 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0906 16:55:22.449278    4337 cache.go:107] acquiring lock: {Name:mk1f6a556529b28267c0ce8bc4cb4fdcd11f223f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:55:22.449524    4337 start.go:365] acquiring machines lock for no-preload-147000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:55:22.449522    4337 cache.go:107] acquiring lock: {Name:mk74500bdafb014dd7e13f8bdfe60f03482508f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:55:22.449556    4337 start.go:369] acquired machines lock for "no-preload-147000" in 24.417µs
	I0906 16:55:22.449556    4337 cache.go:107] acquiring lock: {Name:mkc371d47571f0811e57e940a177d42ce1e6af2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:55:22.449433    4337 cache.go:107] acquiring lock: {Name:mk96c30c35db4f93ae2290bab10105e257ed64d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:55:22.449647    4337 cache.go:115] /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0906 16:55:22.449661    4337 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 383.625µs
	I0906 16:55:22.449669    4337 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0906 16:55:22.449669    4337 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.1
	I0906 16:55:22.449497    4337 cache.go:107] acquiring lock: {Name:mk22ee94cb15468dc37bad4ace531f6b08dda096 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:55:22.449723    4337 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.1
	I0906 16:55:22.449567    4337 start.go:93] Provisioning new machine with config: &{Name:no-preload-147000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-147000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:55:22.449712    4337 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.1
	I0906 16:55:22.449750    4337 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:55:22.449781    4337 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0906 16:55:22.458147    4337 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 16:55:22.464060    4337 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0906 16:55:22.464588    4337 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0906 16:55:22.467937    4337 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.1
	I0906 16:55:22.468066    4337 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0906 16:55:22.468159    4337 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.1
	I0906 16:55:22.468388    4337 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.1
	I0906 16:55:22.468461    4337 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0906 16:55:22.473640    4337 start.go:159] libmachine.API.Create for "no-preload-147000" (driver="qemu2")
	I0906 16:55:22.473660    4337 client.go:168] LocalClient.Create starting
	I0906 16:55:22.473715    4337 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:55:22.473739    4337 main.go:141] libmachine: Decoding PEM data...
	I0906 16:55:22.473749    4337 main.go:141] libmachine: Parsing certificate...
	I0906 16:55:22.473784    4337 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:55:22.473801    4337 main.go:141] libmachine: Decoding PEM data...
	I0906 16:55:22.473808    4337 main.go:141] libmachine: Parsing certificate...
	I0906 16:55:22.474158    4337 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:55:22.593703    4337 main.go:141] libmachine: Creating SSH key...
	I0906 16:55:22.766259    4337 main.go:141] libmachine: Creating Disk image...
	I0906 16:55:22.766304    4337 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:55:22.767516    4337 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/no-preload-147000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/no-preload-147000/disk.qcow2
	I0906 16:55:22.776722    4337 main.go:141] libmachine: STDOUT: 
	I0906 16:55:22.776740    4337 main.go:141] libmachine: STDERR: 
	I0906 16:55:22.776833    4337 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/no-preload-147000/disk.qcow2 +20000M
	I0906 16:55:22.784611    4337 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:55:22.784628    4337 main.go:141] libmachine: STDERR: 
	I0906 16:55:22.784651    4337 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/no-preload-147000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/no-preload-147000/disk.qcow2
	I0906 16:55:22.784659    4337 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:55:22.784694    4337 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/no-preload-147000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/no-preload-147000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/no-preload-147000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:ff:97:ac:f7:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/no-preload-147000/disk.qcow2
	I0906 16:55:22.786366    4337 main.go:141] libmachine: STDOUT: 
	I0906 16:55:22.786384    4337 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:55:22.786403    4337 client.go:171] LocalClient.Create took 312.750958ms
	I0906 16:55:23.038058    4337 cache.go:162] opening:  /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1
	I0906 16:55:23.090927    4337 cache.go:162] opening:  /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0906 16:55:23.227174    4337 cache.go:157] /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0906 16:55:23.227195    4337 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 777.948417ms
	I0906 16:55:23.227204    4337 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0906 16:55:23.292464    4337 cache.go:162] opening:  /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1
	I0906 16:55:23.491991    4337 cache.go:162] opening:  /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0
	I0906 16:55:23.779869    4337 cache.go:162] opening:  /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1
	I0906 16:55:23.921746    4337 cache.go:162] opening:  /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1
	I0906 16:55:24.134012    4337 cache.go:162] opening:  /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1
	I0906 16:55:24.786565    4337 start.go:128] duration metric: createHost completed in 2.336855417s
	I0906 16:55:24.786620    4337 start.go:83] releasing machines lock for "no-preload-147000", held for 2.337152583s
	W0906 16:55:24.786685    4337 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:55:24.802261    4337 out.go:177] * Deleting "no-preload-147000" in qemu2 ...
	W0906 16:55:24.819057    4337 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:55:24.819085    4337 start.go:687] Will try again in 5 seconds ...
	I0906 16:55:25.963534    4337 cache.go:157] /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0906 16:55:25.963584    4337 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 3.5142525s
	I0906 16:55:25.963613    4337 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0906 16:55:26.251414    4337 cache.go:157] /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1 exists
	I0906 16:55:26.251471    4337 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.1" -> "/Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1" took 3.802137667s
	I0906 16:55:26.251497    4337 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.1 -> /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1 succeeded
	I0906 16:55:26.727324    4337 cache.go:157] /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1 exists
	I0906 16:55:26.727409    4337 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.1" -> "/Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1" took 4.278166292s
	I0906 16:55:26.727440    4337 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.1 -> /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1 succeeded
	I0906 16:55:27.266031    4337 cache.go:157] /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1 exists
	I0906 16:55:27.266050    4337 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.1" -> "/Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1" took 4.816972167s
	I0906 16:55:27.266058    4337 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.1 -> /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1 succeeded
	I0906 16:55:28.834412    4337 cache.go:157] /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1 exists
	I0906 16:55:28.834481    4337 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.1" -> "/Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1" took 6.385330834s
	I0906 16:55:28.834508    4337 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.1 -> /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1 succeeded
	I0906 16:55:29.819249    4337 start.go:365] acquiring machines lock for no-preload-147000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:55:29.819743    4337 start.go:369] acquired machines lock for "no-preload-147000" in 408µs
	I0906 16:55:29.819871    4337 start.go:93] Provisioning new machine with config: &{Name:no-preload-147000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-147000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:55:29.820146    4337 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:55:29.829719    4337 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 16:55:29.877451    4337 start.go:159] libmachine.API.Create for "no-preload-147000" (driver="qemu2")
	I0906 16:55:29.877504    4337 client.go:168] LocalClient.Create starting
	I0906 16:55:29.877621    4337 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:55:29.877693    4337 main.go:141] libmachine: Decoding PEM data...
	I0906 16:55:29.877734    4337 main.go:141] libmachine: Parsing certificate...
	I0906 16:55:29.877840    4337 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:55:29.877875    4337 main.go:141] libmachine: Decoding PEM data...
	I0906 16:55:29.877895    4337 main.go:141] libmachine: Parsing certificate...
	I0906 16:55:29.878506    4337 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:55:30.003257    4337 main.go:141] libmachine: Creating SSH key...
	I0906 16:55:30.093048    4337 main.go:141] libmachine: Creating Disk image...
	I0906 16:55:30.093060    4337 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:55:30.093200    4337 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/no-preload-147000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/no-preload-147000/disk.qcow2
	I0906 16:55:30.101892    4337 main.go:141] libmachine: STDOUT: 
	I0906 16:55:30.101906    4337 main.go:141] libmachine: STDERR: 
	I0906 16:55:30.101954    4337 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/no-preload-147000/disk.qcow2 +20000M
	I0906 16:55:30.109203    4337 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:55:30.109225    4337 main.go:141] libmachine: STDERR: 
	I0906 16:55:30.109234    4337 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/no-preload-147000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/no-preload-147000/disk.qcow2
	I0906 16:55:30.109241    4337 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:55:30.109287    4337 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/no-preload-147000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/no-preload-147000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/no-preload-147000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:67:2f:bb:9c:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/no-preload-147000/disk.qcow2
	I0906 16:55:30.110841    4337 main.go:141] libmachine: STDOUT: 
	I0906 16:55:30.110855    4337 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:55:30.110866    4337 client.go:171] LocalClient.Create took 233.36625ms
	I0906 16:55:32.110922    4337 start.go:128] duration metric: createHost completed in 2.290849958s
	I0906 16:55:32.110977    4337 start.go:83] releasing machines lock for "no-preload-147000", held for 2.29130425s
	W0906 16:55:32.111308    4337 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-147000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-147000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:55:32.121884    4337 out.go:177] 
	W0906 16:55:32.126113    4337 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 16:55:32.126149    4337 out.go:239] * 
	* 
	W0906 16:55:32.129117    4337 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 16:55:32.139043    4337 out.go:177] 
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-147000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-147000 -n no-preload-147000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-147000 -n no-preload-147000: exit status 7 (63.104542ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-147000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.88s)
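Editor's note: every qemu2 start failure in this log traces back to the same STDERR line, `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet daemon the qemu2 driver's `Network:socket_vmnet` mode depends on was not listening on the test host. A minimal diagnostic sketch for the host (the default socket path is taken from the log; the Homebrew service name `socket_vmnet` is an assumption based on a typical install):

```shell
#!/bin/sh
# Check whether a socket_vmnet daemon is listening at the unix-socket path
# minikube's qemu2 driver connects to, and print a hint if it is not.
check_socket_vmnet() {
  sock="${1:-/var/run/socket_vmnet}"
  if [ -S "$sock" ]; then
    echo "ok: socket present at $sock"
  else
    echo "missing: no socket at $sock"
    # Assumed remedy for a Homebrew-installed socket_vmnet:
    echo "hint: sudo brew services start socket_vmnet"
  fi
}

check_socket_vmnet "$@"
```

Run before the test suite; a `missing:` result predicts exactly the `Connection refused` failures seen above.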
TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-782000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-782000 create -f testdata/busybox.yaml: exit status 1 (30.901459ms)
** stderr ** 
	error: no openapi getter
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-782000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000: exit status 7 (28.671834ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-782000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000: exit status 7 (28.26425ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-782000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-782000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-782000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-782000 describe deploy/metrics-server -n kube-system: exit status 1 (26.130583ms)
** stderr ** 
	error: context "old-k8s-version-782000" does not exist
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-782000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000: exit status 7 (28.651583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-782000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-782000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-782000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (5.188901416s)

-- stdout --
	* [old-k8s-version-782000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	* Using the qemu2 driver based on existing profile
	* Starting control plane node old-k8s-version-782000 in cluster old-k8s-version-782000
	* Restarting existing qemu2 VM for "old-k8s-version-782000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-782000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 16:55:27.642653    4467 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:55:27.642761    4467 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:55:27.642765    4467 out.go:309] Setting ErrFile to fd 2...
	I0906 16:55:27.642767    4467 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:55:27.642878    4467 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:55:27.643788    4467 out.go:303] Setting JSON to false
	I0906 16:55:27.658623    4467 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1501,"bootTime":1694043026,"procs":413,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0906 16:55:27.658673    4467 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 16:55:27.662732    4467 out.go:177] * [old-k8s-version-782000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 16:55:27.670717    4467 notify.go:220] Checking for updates...
	I0906 16:55:27.674647    4467 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 16:55:27.678677    4467 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:55:27.681670    4467 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 16:55:27.689636    4467 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 16:55:27.697601    4467 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	I0906 16:55:27.704652    4467 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 16:55:27.708841    4467 config.go:182] Loaded profile config "old-k8s-version-782000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0906 16:55:27.712647    4467 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0906 16:55:27.716683    4467 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 16:55:27.720615    4467 out.go:177] * Using the qemu2 driver based on existing profile
	I0906 16:55:27.727524    4467 start.go:298] selected driver: qemu2
	I0906 16:55:27.727528    4467 start.go:902] validating driver "qemu2" against &{Name:old-k8s-version-782000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-782000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:55:27.727582    4467 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 16:55:27.729670    4467 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 16:55:27.729703    4467 cni.go:84] Creating CNI manager for ""
	I0906 16:55:27.729709    4467 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0906 16:55:27.729714    4467 start_flags.go:321] config:
	{Name:old-k8s-version-782000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-782000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:55:27.733817    4467 iso.go:125] acquiring lock: {Name:mk208580a41bd83b6238142fbb7ad0497ef8967b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:55:27.737642    4467 out.go:177] * Starting control plane node old-k8s-version-782000 in cluster old-k8s-version-782000
	I0906 16:55:27.745662    4467 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0906 16:55:27.745680    4467 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0906 16:55:27.745704    4467 cache.go:57] Caching tarball of preloaded images
	I0906 16:55:27.745766    4467 preload.go:174] Found /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 16:55:27.745772    4467 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0906 16:55:27.745855    4467 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/old-k8s-version-782000/config.json ...
	I0906 16:55:27.746099    4467 start.go:365] acquiring machines lock for old-k8s-version-782000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:55:27.746128    4467 start.go:369] acquired machines lock for "old-k8s-version-782000" in 23.292µs
	I0906 16:55:27.746137    4467 start.go:96] Skipping create...Using existing machine configuration
	I0906 16:55:27.746142    4467 fix.go:54] fixHost starting: 
	I0906 16:55:27.746261    4467 fix.go:102] recreateIfNeeded on old-k8s-version-782000: state=Stopped err=<nil>
	W0906 16:55:27.746270    4467 fix.go:128] unexpected machine state, will restart: <nil>
	I0906 16:55:27.749612    4467 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-782000" ...
	I0906 16:55:27.757710    4467 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/old-k8s-version-782000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/old-k8s-version-782000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/old-k8s-version-782000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:03:37:b6:db:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/old-k8s-version-782000/disk.qcow2
	I0906 16:55:27.759466    4467 main.go:141] libmachine: STDOUT: 
	I0906 16:55:27.759479    4467 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:55:27.759509    4467 fix.go:56] fixHost completed within 13.367333ms
	I0906 16:55:27.759515    4467 start.go:83] releasing machines lock for "old-k8s-version-782000", held for 13.383792ms
	W0906 16:55:27.759522    4467 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 16:55:27.759559    4467 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:55:27.759563    4467 start.go:687] Will try again in 5 seconds ...
	I0906 16:55:32.761393    4467 start.go:365] acquiring machines lock for old-k8s-version-782000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:55:32.761461    4467 start.go:369] acquired machines lock for "old-k8s-version-782000" in 49.292µs
	I0906 16:55:32.761482    4467 start.go:96] Skipping create...Using existing machine configuration
	I0906 16:55:32.761486    4467 fix.go:54] fixHost starting: 
	I0906 16:55:32.761631    4467 fix.go:102] recreateIfNeeded on old-k8s-version-782000: state=Stopped err=<nil>
	W0906 16:55:32.761636    4467 fix.go:128] unexpected machine state, will restart: <nil>
	I0906 16:55:32.765787    4467 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-782000" ...
	I0906 16:55:32.773775    4467 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/old-k8s-version-782000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/old-k8s-version-782000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/old-k8s-version-782000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:03:37:b6:db:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/old-k8s-version-782000/disk.qcow2
	I0906 16:55:32.775461    4467 main.go:141] libmachine: STDOUT: 
	I0906 16:55:32.775473    4467 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:55:32.775491    4467 fix.go:56] fixHost completed within 14.005792ms
	I0906 16:55:32.775496    4467 start.go:83] releasing machines lock for "old-k8s-version-782000", held for 14.031584ms
	W0906 16:55:32.775548    4467 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-782000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-782000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:55:32.783666    4467 out.go:177] 
	W0906 16:55:32.787880    4467 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 16:55:32.787895    4467 out.go:239] * 
	* 
	W0906 16:55:32.788429    4467 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 16:55:32.798704    4467 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-782000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000: exit status 7 (30.125208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-782000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.22s)

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-147000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-147000 create -f testdata/busybox.yaml: exit status 1 (29.721875ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-147000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-147000 -n no-preload-147000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-147000 -n no-preload-147000: exit status 7 (28.742875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-147000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-147000 -n no-preload-147000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-147000 -n no-preload-147000: exit status 7 (28.5825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-147000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-147000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-147000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-147000 describe deploy/metrics-server -n kube-system: exit status 1 (25.748792ms)

** stderr ** 
	error: context "no-preload-147000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-147000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-147000 -n no-preload-147000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-147000 -n no-preload-147000: exit status 7 (29.188666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-147000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/no-preload/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-147000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-147000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (5.178534833s)

-- stdout --
	* [no-preload-147000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node no-preload-147000 in cluster no-preload-147000
	* Restarting existing qemu2 VM for "no-preload-147000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-147000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 16:55:32.594868    4496 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:55:32.594989    4496 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:55:32.594992    4496 out.go:309] Setting ErrFile to fd 2...
	I0906 16:55:32.594994    4496 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:55:32.595107    4496 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:55:32.596031    4496 out.go:303] Setting JSON to false
	I0906 16:55:32.610964    4496 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1506,"bootTime":1694043026,"procs":413,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0906 16:55:32.611030    4496 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 16:55:32.614926    4496 out.go:177] * [no-preload-147000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 16:55:32.621958    4496 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 16:55:32.625932    4496 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:55:32.622002    4496 notify.go:220] Checking for updates...
	I0906 16:55:32.631946    4496 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 16:55:32.634889    4496 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 16:55:32.637929    4496 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	I0906 16:55:32.640970    4496 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 16:55:32.642537    4496 config.go:182] Loaded profile config "no-preload-147000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:55:32.642766    4496 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 16:55:32.646927    4496 out.go:177] * Using the qemu2 driver based on existing profile
	I0906 16:55:32.653760    4496 start.go:298] selected driver: qemu2
	I0906 16:55:32.653765    4496 start.go:902] validating driver "qemu2" against &{Name:no-preload-147000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-147000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:55:32.653815    4496 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 16:55:32.655693    4496 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 16:55:32.655721    4496 cni.go:84] Creating CNI manager for ""
	I0906 16:55:32.655727    4496 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 16:55:32.655732    4496 start_flags.go:321] config:
	{Name:no-preload-147000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-147000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:55:32.659570    4496 iso.go:125] acquiring lock: {Name:mk208580a41bd83b6238142fbb7ad0497ef8967b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:55:32.666931    4496 out.go:177] * Starting control plane node no-preload-147000 in cluster no-preload-147000
	I0906 16:55:32.670880    4496 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 16:55:32.670966    4496 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/no-preload-147000/config.json ...
	I0906 16:55:32.670963    4496 cache.go:107] acquiring lock: {Name:mk1f6a556529b28267c0ce8bc4cb4fdcd11f223f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:55:32.670963    4496 cache.go:107] acquiring lock: {Name:mk96c30c35db4f93ae2290bab10105e257ed64d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:55:32.670995    4496 cache.go:107] acquiring lock: {Name:mkc371d47571f0811e57e940a177d42ce1e6af2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:55:32.671012    4496 cache.go:115] /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0906 16:55:32.671016    4496 cache.go:107] acquiring lock: {Name:mkcea8adbcf4473108cd77501b079c01923dd55c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:55:32.671020    4496 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 57.625µs
	I0906 16:55:32.671026    4496 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0906 16:55:32.671033    4496 cache.go:115] /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1 exists
	I0906 16:55:32.671033    4496 cache.go:107] acquiring lock: {Name:mke1670fee36de6e57bcaea343db0a8f840100ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:55:32.671039    4496 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.1" -> "/Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1" took 78.25µs
	I0906 16:55:32.671046    4496 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.1 -> /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1 succeeded
	I0906 16:55:32.671053    4496 cache.go:115] /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1 exists
	I0906 16:55:32.671052    4496 cache.go:107] acquiring lock: {Name:mk74500bdafb014dd7e13f8bdfe60f03482508f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:55:32.671057    4496 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.1" -> "/Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1" took 42.75µs
	I0906 16:55:32.671067    4496 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.1 -> /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1 succeeded
	I0906 16:55:32.671069    4496 cache.go:115] /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0906 16:55:32.671112    4496 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 79.5µs
	I0906 16:55:32.671118    4496 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0906 16:55:32.671095    4496 cache.go:115] /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1 exists
	I0906 16:55:32.671122    4496 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.1" -> "/Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1" took 145.792µs
	I0906 16:55:32.671125    4496 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.1 -> /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1 succeeded
	I0906 16:55:32.671096    4496 cache.go:115] /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1 exists
	I0906 16:55:32.671128    4496 cache.go:107] acquiring lock: {Name:mk22ee94cb15468dc37bad4ace531f6b08dda096 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:55:32.671128    4496 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.1" -> "/Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1" took 77.041µs
	I0906 16:55:32.671146    4496 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.1 -> /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1 succeeded
	I0906 16:55:32.671170    4496 cache.go:115] /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0906 16:55:32.671176    4496 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 68.541µs
	I0906 16:55:32.671185    4496 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0906 16:55:32.671177    4496 cache.go:107] acquiring lock: {Name:mk988efe56944e63e2eda24e0bbac05842f50ad1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:55:32.671228    4496 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0906 16:55:32.671287    4496 start.go:365] acquiring machines lock for no-preload-147000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:55:32.671316    4496 start.go:369] acquired machines lock for "no-preload-147000" in 22.917µs
	I0906 16:55:32.671330    4496 start.go:96] Skipping create...Using existing machine configuration
	I0906 16:55:32.671335    4496 fix.go:54] fixHost starting: 
	I0906 16:55:32.671449    4496 fix.go:102] recreateIfNeeded on no-preload-147000: state=Stopped err=<nil>
	W0906 16:55:32.671456    4496 fix.go:128] unexpected machine state, will restart: <nil>
	I0906 16:55:32.675954    4496 out.go:177] * Restarting existing qemu2 VM for "no-preload-147000" ...
	I0906 16:55:32.683987    4496 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/no-preload-147000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/no-preload-147000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/no-preload-147000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:67:2f:bb:9c:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/no-preload-147000/disk.qcow2
	I0906 16:55:32.684792    4496 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0906 16:55:32.686224    4496 main.go:141] libmachine: STDOUT: 
	I0906 16:55:32.686243    4496 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:55:32.686276    4496 fix.go:56] fixHost completed within 14.941917ms
	I0906 16:55:32.686282    4496 start.go:83] releasing machines lock for "no-preload-147000", held for 14.962125ms
	W0906 16:55:32.686291    4496 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 16:55:32.686337    4496 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:55:32.686341    4496 start.go:687] Will try again in 5 seconds ...
	I0906 16:55:33.227490    4496 cache.go:162] opening:  /Users/jenkins/minikube-integration/17174-979/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0
	I0906 16:55:37.686458    4496 start.go:365] acquiring machines lock for no-preload-147000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:55:37.686826    4496 start.go:369] acquired machines lock for "no-preload-147000" in 299.791µs
	I0906 16:55:37.686943    4496 start.go:96] Skipping create...Using existing machine configuration
	I0906 16:55:37.686967    4496 fix.go:54] fixHost starting: 
	I0906 16:55:37.687657    4496 fix.go:102] recreateIfNeeded on no-preload-147000: state=Stopped err=<nil>
	W0906 16:55:37.687685    4496 fix.go:128] unexpected machine state, will restart: <nil>
	I0906 16:55:37.696604    4496 out.go:177] * Restarting existing qemu2 VM for "no-preload-147000" ...
	I0906 16:55:37.700717    4496 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/no-preload-147000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/no-preload-147000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/no-preload-147000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:67:2f:bb:9c:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/no-preload-147000/disk.qcow2
	I0906 16:55:37.710274    4496 main.go:141] libmachine: STDOUT: 
	I0906 16:55:37.710340    4496 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:55:37.710413    4496 fix.go:56] fixHost completed within 23.452625ms
	I0906 16:55:37.710427    4496 start.go:83] releasing machines lock for "no-preload-147000", held for 23.581042ms
	W0906 16:55:37.710717    4496 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-147000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-147000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:55:37.717566    4496 out.go:177] 
	W0906 16:55:37.721775    4496 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 16:55:37.721805    4496 out.go:239] * 
	* 
	W0906 16:55:37.724470    4496 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 16:55:37.733609    4496 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-147000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-147000 -n no-preload-147000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-147000 -n no-preload-147000: exit status 7 (66.677958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-147000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.25s)
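Every failure in the log above shares one root-cause line: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, so QEMU never received a network backend and the VM could not start. A minimal precondition check, as a hedged sketch only (the socket path is copied from the failing command line in the log; how the socket_vmnet daemon is managed on this CI host is not stated in the report):

```shell
# check_socket: report whether the given path exists as a unix socket.
# The qemu2 driver's socket_vmnet_client needs this socket to be live;
# "Connection refused" in the log means nothing was listening behind it.
check_socket() {
    if [ -S "$1" ]; then
        echo "socket present: $1"
    else
        echo "socket missing: $1"
    fi
}

# Path copied from the failing socket_vmnet_client invocation above.
check_socket /var/run/socket_vmnet
```

If the socket is missing or dead on the CI host, restarting the socket_vmnet service there (by whatever mechanism it is installed) before re-running the suite is the likely fix; the `minikube delete -p no-preload-147000` suggestion in the error output will not help while the daemon is down.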

                                                
                                    

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-782000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000: exit status 7 (29.03575ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-782000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-782000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-782000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-782000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.778791ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-782000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-782000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000: exit status 7 (29.518125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-782000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                    

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p old-k8s-version-782000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p old-k8s-version-782000 "sudo crictl images -o json": exit status 89 (37.92625ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-782000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p old-k8s-version-782000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p old-k8s-version-782000"
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000: exit status 7 (28.594375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-782000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

                                                
                                    

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-782000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-782000 --alsologtostderr -v=1: exit status 89 (39.717583ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-782000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 16:55:33.016244    4529 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:55:33.016618    4529 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:55:33.016621    4529 out.go:309] Setting ErrFile to fd 2...
	I0906 16:55:33.016623    4529 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:55:33.016764    4529 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:55:33.016979    4529 out.go:303] Setting JSON to false
	I0906 16:55:33.016987    4529 mustload.go:65] Loading cluster: old-k8s-version-782000
	I0906 16:55:33.017146    4529 config.go:182] Loaded profile config "old-k8s-version-782000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0906 16:55:33.021801    4529 out.go:177] * The control plane node must be running for this command
	I0906 16:55:33.025895    4529 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-782000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-782000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000: exit status 7 (28.446959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-782000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000: exit status 7 (28.447833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-782000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

                                                
                                    

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-571000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-571000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (9.908224334s)

                                                
                                                
-- stdout --
	* [embed-certs-571000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node embed-certs-571000 in cluster embed-certs-571000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-571000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 16:55:33.486542    4552 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:55:33.486646    4552 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:55:33.486649    4552 out.go:309] Setting ErrFile to fd 2...
	I0906 16:55:33.486652    4552 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:55:33.486758    4552 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:55:33.487794    4552 out.go:303] Setting JSON to false
	I0906 16:55:33.503072    4552 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1507,"bootTime":1694043026,"procs":413,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0906 16:55:33.503132    4552 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 16:55:33.507718    4552 out.go:177] * [embed-certs-571000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 16:55:33.514739    4552 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 16:55:33.514797    4552 notify.go:220] Checking for updates...
	I0906 16:55:33.520663    4552 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:55:33.523718    4552 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 16:55:33.526689    4552 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 16:55:33.529663    4552 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	I0906 16:55:33.536627    4552 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 16:55:33.540034    4552 config.go:182] Loaded profile config "multinode-994000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:55:33.540106    4552 config.go:182] Loaded profile config "no-preload-147000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:55:33.540142    4552 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 16:55:33.544681    4552 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 16:55:33.551647    4552 start.go:298] selected driver: qemu2
	I0906 16:55:33.551653    4552 start.go:902] validating driver "qemu2" against <nil>
	I0906 16:55:33.551661    4552 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 16:55:33.553853    4552 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 16:55:33.556759    4552 out.go:177] * Automatically selected the socket_vmnet network
	I0906 16:55:33.559719    4552 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 16:55:33.559756    4552 cni.go:84] Creating CNI manager for ""
	I0906 16:55:33.559765    4552 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 16:55:33.559769    4552 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 16:55:33.559784    4552 start_flags.go:321] config:
	{Name:embed-certs-571000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-571000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:55:33.564278    4552 iso.go:125] acquiring lock: {Name:mk208580a41bd83b6238142fbb7ad0497ef8967b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:55:33.571681    4552 out.go:177] * Starting control plane node embed-certs-571000 in cluster embed-certs-571000
	I0906 16:55:33.575675    4552 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 16:55:33.575692    4552 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 16:55:33.575710    4552 cache.go:57] Caching tarball of preloaded images
	I0906 16:55:33.575762    4552 preload.go:174] Found /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 16:55:33.575767    4552 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 16:55:33.575828    4552 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/embed-certs-571000/config.json ...
	I0906 16:55:33.575840    4552 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/embed-certs-571000/config.json: {Name:mkf3abc96d4050cd80d8498325511e0084ef787a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:55:33.576041    4552 start.go:365] acquiring machines lock for embed-certs-571000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:55:33.576070    4552 start.go:369] acquired machines lock for "embed-certs-571000" in 23.667µs
	I0906 16:55:33.576081    4552 start.go:93] Provisioning new machine with config: &{Name:embed-certs-571000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-571000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:55:33.576113    4552 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:55:33.584669    4552 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 16:55:33.600392    4552 start.go:159] libmachine.API.Create for "embed-certs-571000" (driver="qemu2")
	I0906 16:55:33.600408    4552 client.go:168] LocalClient.Create starting
	I0906 16:55:33.600469    4552 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:55:33.600501    4552 main.go:141] libmachine: Decoding PEM data...
	I0906 16:55:33.600519    4552 main.go:141] libmachine: Parsing certificate...
	I0906 16:55:33.600565    4552 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:55:33.600585    4552 main.go:141] libmachine: Decoding PEM data...
	I0906 16:55:33.600594    4552 main.go:141] libmachine: Parsing certificate...
	I0906 16:55:33.600951    4552 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:55:33.716119    4552 main.go:141] libmachine: Creating SSH key...
	I0906 16:55:33.927657    4552 main.go:141] libmachine: Creating Disk image...
	I0906 16:55:33.927667    4552 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:55:33.927833    4552 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/embed-certs-571000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/embed-certs-571000/disk.qcow2
	I0906 16:55:33.936573    4552 main.go:141] libmachine: STDOUT: 
	I0906 16:55:33.936589    4552 main.go:141] libmachine: STDERR: 
	I0906 16:55:33.936656    4552 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/embed-certs-571000/disk.qcow2 +20000M
	I0906 16:55:33.943888    4552 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:55:33.943899    4552 main.go:141] libmachine: STDERR: 
	I0906 16:55:33.943912    4552 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/embed-certs-571000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/embed-certs-571000/disk.qcow2
	I0906 16:55:33.943918    4552 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:55:33.943959    4552 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/embed-certs-571000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/embed-certs-571000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/embed-certs-571000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:ef:fa:1e:1f:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/embed-certs-571000/disk.qcow2
	I0906 16:55:33.945496    4552 main.go:141] libmachine: STDOUT: 
	I0906 16:55:33.945511    4552 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:55:33.945532    4552 client.go:171] LocalClient.Create took 345.133458ms
	I0906 16:55:35.947606    4552 start.go:128] duration metric: createHost completed in 2.371573083s
	I0906 16:55:35.947713    4552 start.go:83] releasing machines lock for "embed-certs-571000", held for 2.371693333s
	W0906 16:55:35.947778    4552 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:55:35.955348    4552 out.go:177] * Deleting "embed-certs-571000" in qemu2 ...
	W0906 16:55:35.975250    4552 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:55:35.975282    4552 start.go:687] Will try again in 5 seconds ...
	I0906 16:55:40.977332    4552 start.go:365] acquiring machines lock for embed-certs-571000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:55:41.051723    4552 start.go:369] acquired machines lock for "embed-certs-571000" in 74.209625ms
	I0906 16:55:41.051912    4552 start.go:93] Provisioning new machine with config: &{Name:embed-certs-571000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-571000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:55:41.052172    4552 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:55:41.061677    4552 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 16:55:41.109394    4552 start.go:159] libmachine.API.Create for "embed-certs-571000" (driver="qemu2")
	I0906 16:55:41.109433    4552 client.go:168] LocalClient.Create starting
	I0906 16:55:41.109563    4552 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:55:41.109625    4552 main.go:141] libmachine: Decoding PEM data...
	I0906 16:55:41.109641    4552 main.go:141] libmachine: Parsing certificate...
	I0906 16:55:41.109712    4552 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:55:41.109752    4552 main.go:141] libmachine: Decoding PEM data...
	I0906 16:55:41.109769    4552 main.go:141] libmachine: Parsing certificate...
	I0906 16:55:41.110226    4552 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:55:41.238740    4552 main.go:141] libmachine: Creating SSH key...
	I0906 16:55:41.307855    4552 main.go:141] libmachine: Creating Disk image...
	I0906 16:55:41.307865    4552 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:55:41.307988    4552 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/embed-certs-571000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/embed-certs-571000/disk.qcow2
	I0906 16:55:41.316525    4552 main.go:141] libmachine: STDOUT: 
	I0906 16:55:41.316540    4552 main.go:141] libmachine: STDERR: 
	I0906 16:55:41.316597    4552 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/embed-certs-571000/disk.qcow2 +20000M
	I0906 16:55:41.323698    4552 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:55:41.323717    4552 main.go:141] libmachine: STDERR: 
	I0906 16:55:41.323732    4552 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/embed-certs-571000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/embed-certs-571000/disk.qcow2
	I0906 16:55:41.323737    4552 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:55:41.323780    4552 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/embed-certs-571000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/embed-certs-571000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/embed-certs-571000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:7d:01:26:f8:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/embed-certs-571000/disk.qcow2
	I0906 16:55:41.325352    4552 main.go:141] libmachine: STDOUT: 
	I0906 16:55:41.325369    4552 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:55:41.325382    4552 client.go:171] LocalClient.Create took 215.95275ms
	I0906 16:55:43.327504    4552 start.go:128] duration metric: createHost completed in 2.2753225s
	I0906 16:55:43.327616    4552 start.go:83] releasing machines lock for "embed-certs-571000", held for 2.275931375s
	W0906 16:55:43.328067    4552 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-571000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:55:43.338532    4552 out.go:177] 
	W0906 16:55:43.343496    4552 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 16:55:43.343535    4552 out.go:239] * 
	W0906 16:55:43.346509    4552 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 16:55:43.354420    4552 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-571000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-571000 -n embed-certs-571000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-571000 -n embed-certs-571000: exit status 7 (69.794792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-571000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.98s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-147000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-147000 -n no-preload-147000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-147000 -n no-preload-147000: exit status 7 (31.795125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-147000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.05s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-147000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-147000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-147000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.822333ms)

** stderr ** 
	error: context "no-preload-147000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-147000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-147000 -n no-preload-147000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-147000 -n no-preload-147000: exit status 7 (28.612709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-147000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.05s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p no-preload-147000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p no-preload-147000 "sudo crictl images -o json": exit status 89 (38.53175ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-147000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p no-preload-147000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p no-preload-147000"
start_stop_delete_test.go:304: v1.28.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.1",
- 	"registry.k8s.io/kube-controller-manager:v1.28.1",
- 	"registry.k8s.io/kube-proxy:v1.28.1",
- 	"registry.k8s.io/kube-scheduler:v1.28.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-147000 -n no-preload-147000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-147000 -n no-preload-147000: exit status 7 (28.89875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-147000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-147000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-147000 --alsologtostderr -v=1: exit status 89 (41.575666ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-147000"

-- /stdout --
** stderr ** 
	I0906 16:55:37.997604    4574 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:55:37.997761    4574 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:55:37.997764    4574 out.go:309] Setting ErrFile to fd 2...
	I0906 16:55:37.997766    4574 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:55:37.997874    4574 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:55:37.998079    4574 out.go:303] Setting JSON to false
	I0906 16:55:37.998088    4574 mustload.go:65] Loading cluster: no-preload-147000
	I0906 16:55:37.998662    4574 config.go:182] Loaded profile config "no-preload-147000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:55:38.003479    4574 out.go:177] * The control plane node must be running for this command
	I0906 16:55:38.007545    4574 out.go:177]   To start a cluster, run: "minikube start -p no-preload-147000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-147000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-147000 -n no-preload-147000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-147000 -n no-preload-147000: exit status 7 (28.761917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-147000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-147000 -n no-preload-147000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-147000 -n no-preload-147000: exit status 7 (28.432542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-147000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-236000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-236000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (9.932620542s)

-- stdout --
	* [default-k8s-diff-port-236000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node default-k8s-diff-port-236000 in cluster default-k8s-diff-port-236000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-236000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 16:55:38.715607    4609 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:55:38.715717    4609 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:55:38.715720    4609 out.go:309] Setting ErrFile to fd 2...
	I0906 16:55:38.715722    4609 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:55:38.715837    4609 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:55:38.716899    4609 out.go:303] Setting JSON to false
	I0906 16:55:38.731850    4609 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1512,"bootTime":1694043026,"procs":413,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0906 16:55:38.731910    4609 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 16:55:38.736235    4609 out.go:177] * [default-k8s-diff-port-236000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 16:55:38.739307    4609 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 16:55:38.739385    4609 notify.go:220] Checking for updates...
	I0906 16:55:38.747218    4609 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:55:38.750283    4609 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 16:55:38.754303    4609 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 16:55:38.757264    4609 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	I0906 16:55:38.760285    4609 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 16:55:38.764034    4609 config.go:182] Loaded profile config "embed-certs-571000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:55:38.764116    4609 config.go:182] Loaded profile config "multinode-994000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:55:38.764210    4609 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 16:55:38.768213    4609 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 16:55:38.775260    4609 start.go:298] selected driver: qemu2
	I0906 16:55:38.775265    4609 start.go:902] validating driver "qemu2" against <nil>
	I0906 16:55:38.775271    4609 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 16:55:38.777285    4609 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 16:55:38.780200    4609 out.go:177] * Automatically selected the socket_vmnet network
	I0906 16:55:38.784282    4609 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 16:55:38.784307    4609 cni.go:84] Creating CNI manager for ""
	I0906 16:55:38.784314    4609 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 16:55:38.784327    4609 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 16:55:38.784333    4609 start_flags.go:321] config:
	{Name:default-k8s-diff-port-236000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-236000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Stat
icIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:55:38.788450    4609 iso.go:125] acquiring lock: {Name:mk208580a41bd83b6238142fbb7ad0497ef8967b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:55:38.795264    4609 out.go:177] * Starting control plane node default-k8s-diff-port-236000 in cluster default-k8s-diff-port-236000
	I0906 16:55:38.799333    4609 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 16:55:38.799354    4609 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 16:55:38.799373    4609 cache.go:57] Caching tarball of preloaded images
	I0906 16:55:38.799427    4609 preload.go:174] Found /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 16:55:38.799433    4609 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 16:55:38.799511    4609 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/default-k8s-diff-port-236000/config.json ...
	I0906 16:55:38.799524    4609 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/default-k8s-diff-port-236000/config.json: {Name:mk65b34b78261a5a592b70a63adf7b7d025bf053 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:55:38.799728    4609 start.go:365] acquiring machines lock for default-k8s-diff-port-236000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:55:38.799761    4609 start.go:369] acquired machines lock for "default-k8s-diff-port-236000" in 23.75µs
	I0906 16:55:38.799777    4609 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-236000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22
KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-236000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:55:38.799828    4609 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:55:38.807325    4609 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 16:55:38.821937    4609 start.go:159] libmachine.API.Create for "default-k8s-diff-port-236000" (driver="qemu2")
	I0906 16:55:38.821955    4609 client.go:168] LocalClient.Create starting
	I0906 16:55:38.822029    4609 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:55:38.822066    4609 main.go:141] libmachine: Decoding PEM data...
	I0906 16:55:38.822080    4609 main.go:141] libmachine: Parsing certificate...
	I0906 16:55:38.822121    4609 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:55:38.822140    4609 main.go:141] libmachine: Decoding PEM data...
	I0906 16:55:38.822147    4609 main.go:141] libmachine: Parsing certificate...
	I0906 16:55:38.822477    4609 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:55:38.937584    4609 main.go:141] libmachine: Creating SSH key...
	I0906 16:55:39.031948    4609 main.go:141] libmachine: Creating Disk image...
	I0906 16:55:39.031958    4609 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:55:39.032085    4609 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/default-k8s-diff-port-236000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/default-k8s-diff-port-236000/disk.qcow2
	I0906 16:55:39.040585    4609 main.go:141] libmachine: STDOUT: 
	I0906 16:55:39.040601    4609 main.go:141] libmachine: STDERR: 
	I0906 16:55:39.040655    4609 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/default-k8s-diff-port-236000/disk.qcow2 +20000M
	I0906 16:55:39.047745    4609 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:55:39.047757    4609 main.go:141] libmachine: STDERR: 
	I0906 16:55:39.047773    4609 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/default-k8s-diff-port-236000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/default-k8s-diff-port-236000/disk.qcow2
	I0906 16:55:39.047779    4609 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:55:39.047829    4609 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/default-k8s-diff-port-236000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/default-k8s-diff-port-236000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/default-k8s-diff-port-236000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:e4:72:07:af:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/default-k8s-diff-port-236000/disk.qcow2
	I0906 16:55:39.049438    4609 main.go:141] libmachine: STDOUT: 
	I0906 16:55:39.049451    4609 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:55:39.049469    4609 client.go:171] LocalClient.Create took 227.519167ms
	I0906 16:55:41.051553    4609 start.go:128] duration metric: createHost completed in 2.251795125s
	I0906 16:55:41.051614    4609 start.go:83] releasing machines lock for "default-k8s-diff-port-236000", held for 2.251937208s
	W0906 16:55:41.051702    4609 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:55:41.070892    4609 out.go:177] * Deleting "default-k8s-diff-port-236000" in qemu2 ...
	W0906 16:55:41.086566    4609 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:55:41.086604    4609 start.go:687] Will try again in 5 seconds ...
	I0906 16:55:46.088610    4609 start.go:365] acquiring machines lock for default-k8s-diff-port-236000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:55:46.089021    4609 start.go:369] acquired machines lock for "default-k8s-diff-port-236000" in 292.208µs
	I0906 16:55:46.089137    4609 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-236000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22
KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-236000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:55:46.089434    4609 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:55:46.098968    4609 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 16:55:46.145141    4609 start.go:159] libmachine.API.Create for "default-k8s-diff-port-236000" (driver="qemu2")
	I0906 16:55:46.145193    4609 client.go:168] LocalClient.Create starting
	I0906 16:55:46.145357    4609 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:55:46.145424    4609 main.go:141] libmachine: Decoding PEM data...
	I0906 16:55:46.145448    4609 main.go:141] libmachine: Parsing certificate...
	I0906 16:55:46.145524    4609 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:55:46.145559    4609 main.go:141] libmachine: Decoding PEM data...
	I0906 16:55:46.145580    4609 main.go:141] libmachine: Parsing certificate...
	I0906 16:55:46.146090    4609 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:55:46.267829    4609 main.go:141] libmachine: Creating SSH key...
	I0906 16:55:46.562513    4609 main.go:141] libmachine: Creating Disk image...
	I0906 16:55:46.562527    4609 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:55:46.562696    4609 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/default-k8s-diff-port-236000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/default-k8s-diff-port-236000/disk.qcow2
	I0906 16:55:46.571515    4609 main.go:141] libmachine: STDOUT: 
	I0906 16:55:46.571531    4609 main.go:141] libmachine: STDERR: 
	I0906 16:55:46.571595    4609 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/default-k8s-diff-port-236000/disk.qcow2 +20000M
	I0906 16:55:46.578926    4609 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:55:46.578948    4609 main.go:141] libmachine: STDERR: 
	I0906 16:55:46.578969    4609 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/default-k8s-diff-port-236000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/default-k8s-diff-port-236000/disk.qcow2
	I0906 16:55:46.578975    4609 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:55:46.579018    4609 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/default-k8s-diff-port-236000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/default-k8s-diff-port-236000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/default-k8s-diff-port-236000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:40:3e:3c:be:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/default-k8s-diff-port-236000/disk.qcow2
	I0906 16:55:46.580599    4609 main.go:141] libmachine: STDOUT: 
	I0906 16:55:46.580613    4609 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:55:46.580627    4609 client.go:171] LocalClient.Create took 435.443792ms
	I0906 16:55:48.582716    4609 start.go:128] duration metric: createHost completed in 2.493341125s
	I0906 16:55:48.582767    4609 start.go:83] releasing machines lock for "default-k8s-diff-port-236000", held for 2.493828291s
	W0906 16:55:48.583187    4609 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-236000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-236000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:55:48.591726    4609 out.go:177] 
	W0906 16:55:48.596751    4609 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 16:55:48.596881    4609 out.go:239] * 
	* 
	W0906 16:55:48.600036    4609 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 16:55:48.608609    4609 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-236000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-236000 -n default-k8s-diff-port-236000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-236000 -n default-k8s-diff-port-236000: exit status 7 (64.205833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-236000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.00s)
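Every failure in this group has the same root cause: QEMU's network helper could not reach `/var/run/socket_vmnet` ("Connection refused"), which means the socket_vmnet daemon was not running on the CI host. A minimal pre-flight sketch, assuming the Homebrew-based socket_vmnet setup the minikube qemu2 driver documents (paths and the `brew services` name are assumptions; verify them for your install):

```shell
# Check whether the socket_vmnet unix socket exists before launching qemu2 VMs.
# Assumed path: /var/run/socket_vmnet (matches SocketVMnetPath in the logs above).
SOCK=/var/run/socket_vmnet
if [ -S "$SOCK" ]; then
  echo "socket_vmnet socket present at $SOCK"
else
  # Assumed remedy via the Homebrew service; confirm the formula name first.
  echo "socket_vmnet socket missing at $SOCK"
  echo "try: sudo brew services start socket_vmnet"
fi
```

If the socket is missing, every `minikube start --driver=qemu2` on the host will fail with exit status 80 exactly as recorded here, so gating the test run on this check would turn 87 individual failures into one clear infrastructure error.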

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-571000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-571000 create -f testdata/busybox.yaml: exit status 1 (30.393791ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-571000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-571000 -n embed-certs-571000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-571000 -n embed-certs-571000: exit status 7 (29.053833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-571000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-571000 -n embed-certs-571000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-571000 -n embed-certs-571000: exit status 7 (28.749292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-571000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-571000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-571000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-571000 describe deploy/metrics-server -n kube-system: exit status 1 (25.914459ms)

** stderr ** 
	error: context "embed-certs-571000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-571000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-571000 -n embed-certs-571000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-571000 -n embed-certs-571000: exit status 7 (28.208208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-571000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/SecondStart (5.21s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-571000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-571000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (5.174474417s)

-- stdout --
	* [embed-certs-571000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node embed-certs-571000 in cluster embed-certs-571000
	* Restarting existing qemu2 VM for "embed-certs-571000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-571000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 16:55:43.818122    4641 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:55:43.818236    4641 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:55:43.818239    4641 out.go:309] Setting ErrFile to fd 2...
	I0906 16:55:43.818241    4641 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:55:43.818346    4641 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:55:43.819252    4641 out.go:303] Setting JSON to false
	I0906 16:55:43.834246    4641 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1517,"bootTime":1694043026,"procs":413,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0906 16:55:43.834325    4641 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 16:55:43.838462    4641 out.go:177] * [embed-certs-571000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 16:55:43.845489    4641 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 16:55:43.849389    4641 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:55:43.845562    4641 notify.go:220] Checking for updates...
	I0906 16:55:43.853450    4641 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 16:55:43.856452    4641 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 16:55:43.859357    4641 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	I0906 16:55:43.862412    4641 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 16:55:43.865728    4641 config.go:182] Loaded profile config "embed-certs-571000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:55:43.865988    4641 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 16:55:43.870355    4641 out.go:177] * Using the qemu2 driver based on existing profile
	I0906 16:55:43.877405    4641 start.go:298] selected driver: qemu2
	I0906 16:55:43.877410    4641 start.go:902] validating driver "qemu2" against &{Name:embed-certs-571000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-571000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested
:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:55:43.877469    4641 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 16:55:43.880396    4641 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 16:55:43.880440    4641 cni.go:84] Creating CNI manager for ""
	I0906 16:55:43.880447    4641 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 16:55:43.880452    4641 start_flags.go:321] config:
	{Name:embed-certs-571000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-571000 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/m
inikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:55:43.884622    4641 iso.go:125] acquiring lock: {Name:mk208580a41bd83b6238142fbb7ad0497ef8967b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:55:43.891421    4641 out.go:177] * Starting control plane node embed-certs-571000 in cluster embed-certs-571000
	I0906 16:55:43.895269    4641 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 16:55:43.895288    4641 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 16:55:43.895308    4641 cache.go:57] Caching tarball of preloaded images
	I0906 16:55:43.895366    4641 preload.go:174] Found /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 16:55:43.895379    4641 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 16:55:43.895454    4641 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/embed-certs-571000/config.json ...
	I0906 16:55:43.895740    4641 start.go:365] acquiring machines lock for embed-certs-571000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:55:43.895770    4641 start.go:369] acquired machines lock for "embed-certs-571000" in 24.333µs
	I0906 16:55:43.895780    4641 start.go:96] Skipping create...Using existing machine configuration
	I0906 16:55:43.895784    4641 fix.go:54] fixHost starting: 
	I0906 16:55:43.895904    4641 fix.go:102] recreateIfNeeded on embed-certs-571000: state=Stopped err=<nil>
	W0906 16:55:43.895912    4641 fix.go:128] unexpected machine state, will restart: <nil>
	I0906 16:55:43.902383    4641 out.go:177] * Restarting existing qemu2 VM for "embed-certs-571000" ...
	I0906 16:55:43.906438    4641 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/embed-certs-571000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/embed-certs-571000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/embed-certs-571000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:7d:01:26:f8:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/embed-certs-571000/disk.qcow2
	I0906 16:55:43.908437    4641 main.go:141] libmachine: STDOUT: 
	I0906 16:55:43.908454    4641 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:55:43.908483    4641 fix.go:56] fixHost completed within 12.698208ms
	I0906 16:55:43.908488    4641 start.go:83] releasing machines lock for "embed-certs-571000", held for 12.714375ms
	W0906 16:55:43.908499    4641 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 16:55:43.908527    4641 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:55:43.908531    4641 start.go:687] Will try again in 5 seconds ...
	I0906 16:55:48.908939    4641 start.go:365] acquiring machines lock for embed-certs-571000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:55:48.909013    4641 start.go:369] acquired machines lock for "embed-certs-571000" in 47.125µs
	I0906 16:55:48.909024    4641 start.go:96] Skipping create...Using existing machine configuration
	I0906 16:55:48.909028    4641 fix.go:54] fixHost starting: 
	I0906 16:55:48.909163    4641 fix.go:102] recreateIfNeeded on embed-certs-571000: state=Stopped err=<nil>
	W0906 16:55:48.909168    4641 fix.go:128] unexpected machine state, will restart: <nil>
	I0906 16:55:48.917426    4641 out.go:177] * Restarting existing qemu2 VM for "embed-certs-571000" ...
	I0906 16:55:48.918773    4641 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/embed-certs-571000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/embed-certs-571000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/embed-certs-571000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:7d:01:26:f8:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/embed-certs-571000/disk.qcow2
	I0906 16:55:48.920585    4641 main.go:141] libmachine: STDOUT: 
	I0906 16:55:48.920597    4641 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:55:48.920612    4641 fix.go:56] fixHost completed within 11.583709ms
	I0906 16:55:48.920624    4641 start.go:83] releasing machines lock for "embed-certs-571000", held for 11.608583ms
	W0906 16:55:48.920661    4641 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-571000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-571000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:55:48.933419    4641 out.go:177] 
	W0906 16:55:48.944472    4641 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 16:55:48.944478    4641 out.go:239] * 
	* 
	W0906 16:55:48.945030    4641 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 16:55:48.955424    4641 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-571000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-571000 -n embed-certs-571000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-571000 -n embed-certs-571000: exit status 7 (31.271583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-571000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.21s)
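Editor's note: both restart attempts in the failure above died with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, which points at the socket_vmnet daemon on the CI host rather than at minikube itself. A minimal pre-flight sketch (socket path taken from the log; the remediation hint in the message is an assumption, not part of this report) could look like:

```shell
# Check that the socket_vmnet daemon's unix socket exists before (re)trying
# `minikube start --driver=qemu2`. A missing socket reproduces exactly the
# "Connection refused" seen in the log above.
check_socket_vmnet() {
  sock="${1:-/var/run/socket_vmnet}"
  if [ -S "$sock" ]; then
    echo "ok: $sock exists"
  else
    echo "missing: $sock (is the socket_vmnet service running?)"
    return 1
  fi
}
check_socket_vmnet || true
```

Running such a check before the test suite would let the job fail fast with a clear environment error instead of 87 individual `GUEST_PROVISION` failures.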

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-236000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-236000 create -f testdata/busybox.yaml: exit status 1 (29.326667ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-236000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-236000 -n default-k8s-diff-port-236000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-236000 -n default-k8s-diff-port-236000: exit status 7 (28.628167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-236000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-236000 -n default-k8s-diff-port-236000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-236000 -n default-k8s-diff-port-236000: exit status 7 (28.224834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-236000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-236000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-236000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-236000 describe deploy/metrics-server -n kube-system: exit status 1 (25.837ms)

** stderr ** 
	error: context "default-k8s-diff-port-236000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-236000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-236000 -n default-k8s-diff-port-236000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-236000 -n default-k8s-diff-port-236000: exit status 7 (28.774083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-236000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)
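Editor's note: the `context "default-k8s-diff-port-236000" does not exist` errors above are a downstream symptom — the earlier start exited with status 80, so no kubeconfig entry was ever written. A hedged sketch (context name taken from the log) for separating "cluster never came up" from a genuine addon failure in the post-mortem:

```shell
# Returns success only if the named kubeconfig context exists, so the
# post-mortem can distinguish a missing context from a real deployment error.
context_exists() {
  kubectl config get-contexts -o name 2>/dev/null | grep -qx "$1"
}

if context_exists "default-k8s-diff-port-236000"; then
  echo "context present; kubectl post-mortem output is meaningful"
else
  echo "context missing; kubectl commands will fail regardless of addon state"
fi
```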

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-571000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-571000 -n embed-certs-571000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-571000 -n embed-certs-571000: exit status 7 (30.288959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-571000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-571000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-571000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-571000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.251459ms)

** stderr ** 
	error: context "embed-certs-571000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-571000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-571000 -n embed-certs-571000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-571000 -n embed-certs-571000: exit status 7 (30.7535ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-571000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-236000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-236000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (5.2030555s)

-- stdout --
	* [default-k8s-diff-port-236000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node default-k8s-diff-port-236000 in cluster default-k8s-diff-port-236000
	* Restarting existing qemu2 VM for "default-k8s-diff-port-236000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-236000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 16:55:49.095577    4677 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:55:49.095698    4677 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:55:49.095701    4677 out.go:309] Setting ErrFile to fd 2...
	I0906 16:55:49.095704    4677 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:55:49.095815    4677 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:55:49.096968    4677 out.go:303] Setting JSON to false
	I0906 16:55:49.113683    4677 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1523,"bootTime":1694043026,"procs":413,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0906 16:55:49.113758    4677 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 16:55:49.118453    4677 out.go:177] * [default-k8s-diff-port-236000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 16:55:49.129424    4677 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 16:55:49.125675    4677 notify.go:220] Checking for updates...
	I0906 16:55:49.140382    4677 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:55:49.148289    4677 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 16:55:49.155416    4677 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 16:55:49.158440    4677 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	I0906 16:55:49.161480    4677 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 16:55:49.164607    4677 config.go:182] Loaded profile config "default-k8s-diff-port-236000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:55:49.164853    4677 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 16:55:49.168449    4677 out.go:177] * Using the qemu2 driver based on existing profile
	I0906 16:55:49.175452    4677 start.go:298] selected driver: qemu2
	I0906 16:55:49.175464    4677 start.go:902] validating driver "qemu2" against &{Name:default-k8s-diff-port-236000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-236000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subne
t: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:55:49.175535    4677 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 16:55:49.177598    4677 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 16:55:49.177627    4677 cni.go:84] Creating CNI manager for ""
	I0906 16:55:49.177632    4677 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 16:55:49.177641    4677 start_flags.go:321] config:
	{Name:default-k8s-diff-port-236000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-2360
00 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:55:49.181655    4677 iso.go:125] acquiring lock: {Name:mk208580a41bd83b6238142fbb7ad0497ef8967b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:55:49.185374    4677 out.go:177] * Starting control plane node default-k8s-diff-port-236000 in cluster default-k8s-diff-port-236000
	I0906 16:55:49.193441    4677 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 16:55:49.193532    4677 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 16:55:49.193577    4677 cache.go:57] Caching tarball of preloaded images
	I0906 16:55:49.193660    4677 preload.go:174] Found /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 16:55:49.193666    4677 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 16:55:49.193740    4677 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/default-k8s-diff-port-236000/config.json ...
	I0906 16:55:49.194082    4677 start.go:365] acquiring machines lock for default-k8s-diff-port-236000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:55:49.194114    4677 start.go:369] acquired machines lock for "default-k8s-diff-port-236000" in 22.5µs
	I0906 16:55:49.194123    4677 start.go:96] Skipping create...Using existing machine configuration
	I0906 16:55:49.194126    4677 fix.go:54] fixHost starting: 
	I0906 16:55:49.194243    4677 fix.go:102] recreateIfNeeded on default-k8s-diff-port-236000: state=Stopped err=<nil>
	W0906 16:55:49.194251    4677 fix.go:128] unexpected machine state, will restart: <nil>
	I0906 16:55:49.198436    4677 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-236000" ...
	I0906 16:55:49.206480    4677 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/default-k8s-diff-port-236000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/default-k8s-diff-port-236000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/default-k8s-diff-port-236000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:40:3e:3c:be:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/default-k8s-diff-port-236000/disk.qcow2
	I0906 16:55:49.208124    4677 main.go:141] libmachine: STDOUT: 
	I0906 16:55:49.208138    4677 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:55:49.208167    4677 fix.go:56] fixHost completed within 14.038333ms
	I0906 16:55:49.208171    4677 start.go:83] releasing machines lock for "default-k8s-diff-port-236000", held for 14.054ms
	W0906 16:55:49.208178    4677 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 16:55:49.208219    4677 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:55:49.208223    4677 start.go:687] Will try again in 5 seconds ...
	I0906 16:55:54.210217    4677 start.go:365] acquiring machines lock for default-k8s-diff-port-236000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:55:54.210624    4677 start.go:369] acquired machines lock for "default-k8s-diff-port-236000" in 326.084µs
	I0906 16:55:54.210740    4677 start.go:96] Skipping create...Using existing machine configuration
	I0906 16:55:54.210762    4677 fix.go:54] fixHost starting: 
	I0906 16:55:54.211496    4677 fix.go:102] recreateIfNeeded on default-k8s-diff-port-236000: state=Stopped err=<nil>
	W0906 16:55:54.211523    4677 fix.go:128] unexpected machine state, will restart: <nil>
	I0906 16:55:54.223101    4677 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-236000" ...
	I0906 16:55:54.226110    4677 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/default-k8s-diff-port-236000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/default-k8s-diff-port-236000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/default-k8s-diff-port-236000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:40:3e:3c:be:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/default-k8s-diff-port-236000/disk.qcow2
	I0906 16:55:54.234773    4677 main.go:141] libmachine: STDOUT: 
	I0906 16:55:54.234833    4677 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:55:54.234941    4677 fix.go:56] fixHost completed within 24.184166ms
	I0906 16:55:54.234972    4677 start.go:83] releasing machines lock for "default-k8s-diff-port-236000", held for 24.324708ms
	W0906 16:55:54.235228    4677 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-236000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-236000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:55:54.243879    4677 out.go:177] 
	W0906 16:55:54.246995    4677 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 16:55:54.247031    4677 out.go:239] * 
	* 
	W0906 16:55:54.249579    4677 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 16:55:54.257855    4677 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-236000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-236000 -n default-k8s-diff-port-236000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-236000 -n default-k8s-diff-port-236000: exit status 7 (65.652667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-236000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.27s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p embed-certs-571000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p embed-certs-571000 "sudo crictl images -o json": exit status 89 (59.557917ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-571000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p embed-certs-571000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p embed-certs-571000"
start_stop_delete_test.go:304: v1.28.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.1",
- 	"registry.k8s.io/kube-controller-manager:v1.28.1",
- 	"registry.k8s.io/kube-proxy:v1.28.1",
- 	"registry.k8s.io/kube-scheduler:v1.28.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-571000 -n embed-certs-571000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-571000 -n embed-certs-571000: exit status 7 (31.548333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-571000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.09s)

TestStartStop/group/embed-certs/serial/Pause (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-571000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-571000 --alsologtostderr -v=1: exit status 89 (46.085125ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-571000"

-- /stdout --
** stderr ** 
	I0906 16:55:49.204117    4687 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:55:49.206431    4687 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:55:49.206435    4687 out.go:309] Setting ErrFile to fd 2...
	I0906 16:55:49.206437    4687 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:55:49.206546    4687 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:55:49.206732    4687 out.go:303] Setting JSON to false
	I0906 16:55:49.206742    4687 mustload.go:65] Loading cluster: embed-certs-571000
	I0906 16:55:49.206917    4687 config.go:182] Loaded profile config "embed-certs-571000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:55:49.210461    4687 out.go:177] * The control plane node must be running for this command
	I0906 16:55:49.218558    4687 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-571000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-571000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-571000 -n embed-certs-571000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-571000 -n embed-certs-571000: exit status 7 (29.969334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-571000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-571000 -n embed-certs-571000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-571000 -n embed-certs-571000: exit status 7 (28.999208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-571000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)

TestStartStop/group/newest-cni/serial/FirstStart (9.89s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-401000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-401000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (9.824806042s)

-- stdout --
	* [newest-cni-401000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node newest-cni-401000 in cluster newest-cni-401000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-401000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 16:55:49.665105    4712 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:55:49.665233    4712 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:55:49.665236    4712 out.go:309] Setting ErrFile to fd 2...
	I0906 16:55:49.665238    4712 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:55:49.665342    4712 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:55:49.666397    4712 out.go:303] Setting JSON to false
	I0906 16:55:49.681802    4712 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1523,"bootTime":1694043026,"procs":413,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0906 16:55:49.681854    4712 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 16:55:49.686433    4712 out.go:177] * [newest-cni-401000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 16:55:49.693447    4712 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 16:55:49.697243    4712 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:55:49.693479    4712 notify.go:220] Checking for updates...
	I0906 16:55:49.704364    4712 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 16:55:49.705705    4712 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 16:55:49.708388    4712 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	I0906 16:55:49.711344    4712 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 16:55:49.714622    4712 config.go:182] Loaded profile config "default-k8s-diff-port-236000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:55:49.714680    4712 config.go:182] Loaded profile config "multinode-994000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:55:49.714720    4712 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 16:55:49.719340    4712 out.go:177] * Using the qemu2 driver based on user configuration
	I0906 16:55:49.726339    4712 start.go:298] selected driver: qemu2
	I0906 16:55:49.726344    4712 start.go:902] validating driver "qemu2" against <nil>
	I0906 16:55:49.726349    4712 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 16:55:49.728369    4712 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	W0906 16:55:49.728390    4712 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0906 16:55:49.736353    4712 out.go:177] * Automatically selected the socket_vmnet network
	I0906 16:55:49.739493    4712 start_flags.go:941] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0906 16:55:49.739519    4712 cni.go:84] Creating CNI manager for ""
	I0906 16:55:49.739530    4712 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 16:55:49.739534    4712 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 16:55:49.739539    4712 start_flags.go:321] config:
	{Name:newest-cni-401000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-401000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:d
ocker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/s
ocket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:55:49.743526    4712 iso.go:125] acquiring lock: {Name:mk208580a41bd83b6238142fbb7ad0497ef8967b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:55:49.750365    4712 out.go:177] * Starting control plane node newest-cni-401000 in cluster newest-cni-401000
	I0906 16:55:49.754345    4712 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 16:55:49.754363    4712 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 16:55:49.754376    4712 cache.go:57] Caching tarball of preloaded images
	I0906 16:55:49.754455    4712 preload.go:174] Found /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 16:55:49.754467    4712 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 16:55:49.754525    4712 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/newest-cni-401000/config.json ...
	I0906 16:55:49.754538    4712 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/newest-cni-401000/config.json: {Name:mkb1a8a60043041e73033d3f830dca32b491e428 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:55:49.754736    4712 start.go:365] acquiring machines lock for newest-cni-401000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:55:49.754764    4712 start.go:369] acquired machines lock for "newest-cni-401000" in 22.583µs
	I0906 16:55:49.754777    4712 start.go:93] Provisioning new machine with config: &{Name:newest-cni-401000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-401000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountS
tring:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:55:49.754808    4712 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:55:49.763344    4712 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 16:55:49.778585    4712 start.go:159] libmachine.API.Create for "newest-cni-401000" (driver="qemu2")
	I0906 16:55:49.778614    4712 client.go:168] LocalClient.Create starting
	I0906 16:55:49.778663    4712 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:55:49.778687    4712 main.go:141] libmachine: Decoding PEM data...
	I0906 16:55:49.778696    4712 main.go:141] libmachine: Parsing certificate...
	I0906 16:55:49.778734    4712 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:55:49.778754    4712 main.go:141] libmachine: Decoding PEM data...
	I0906 16:55:49.778760    4712 main.go:141] libmachine: Parsing certificate...
	I0906 16:55:49.779043    4712 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:55:49.897552    4712 main.go:141] libmachine: Creating SSH key...
	I0906 16:55:49.966171    4712 main.go:141] libmachine: Creating Disk image...
	I0906 16:55:49.966179    4712 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:55:49.966315    4712 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/newest-cni-401000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/newest-cni-401000/disk.qcow2
	I0906 16:55:49.974761    4712 main.go:141] libmachine: STDOUT: 
	I0906 16:55:49.974775    4712 main.go:141] libmachine: STDERR: 
	I0906 16:55:49.974818    4712 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/newest-cni-401000/disk.qcow2 +20000M
	I0906 16:55:49.981893    4712 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:55:49.981917    4712 main.go:141] libmachine: STDERR: 
	I0906 16:55:49.981940    4712 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/newest-cni-401000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/newest-cni-401000/disk.qcow2
	I0906 16:55:49.981947    4712 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:55:49.981997    4712 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/newest-cni-401000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/newest-cni-401000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/newest-cni-401000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:e0:ec:78:cb:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/newest-cni-401000/disk.qcow2
	I0906 16:55:49.983557    4712 main.go:141] libmachine: STDOUT: 
	I0906 16:55:49.983572    4712 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:55:49.983595    4712 client.go:171] LocalClient.Create took 204.982792ms
	I0906 16:55:51.985676    4712 start.go:128] duration metric: createHost completed in 2.230944208s
	I0906 16:55:51.985737    4712 start.go:83] releasing machines lock for "newest-cni-401000", held for 2.231056458s
	W0906 16:55:51.985816    4712 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:55:51.995265    4712 out.go:177] * Deleting "newest-cni-401000" in qemu2 ...
	W0906 16:55:52.018866    4712 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:55:52.018894    4712 start.go:687] Will try again in 5 seconds ...
	I0906 16:55:57.020904    4712 start.go:365] acquiring machines lock for newest-cni-401000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:55:57.021390    4712 start.go:369] acquired machines lock for "newest-cni-401000" in 393.458µs
	I0906 16:55:57.021586    4712 start.go:93] Provisioning new machine with config: &{Name:newest-cni-401000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-401000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountS
tring:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 16:55:57.021949    4712 start.go:125] createHost starting for "" (driver="qemu2")
	I0906 16:55:57.027633    4712 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 16:55:57.076756    4712 start.go:159] libmachine.API.Create for "newest-cni-401000" (driver="qemu2")
	I0906 16:55:57.076805    4712 client.go:168] LocalClient.Create starting
	I0906 16:55:57.076925    4712 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/ca.pem
	I0906 16:55:57.076988    4712 main.go:141] libmachine: Decoding PEM data...
	I0906 16:55:57.077005    4712 main.go:141] libmachine: Parsing certificate...
	I0906 16:55:57.077071    4712 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17174-979/.minikube/certs/cert.pem
	I0906 16:55:57.077106    4712 main.go:141] libmachine: Decoding PEM data...
	I0906 16:55:57.077119    4712 main.go:141] libmachine: Parsing certificate...
	I0906 16:55:57.077627    4712 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17174-979/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0906 16:55:57.208554    4712 main.go:141] libmachine: Creating SSH key...
	I0906 16:55:57.403429    4712 main.go:141] libmachine: Creating Disk image...
	I0906 16:55:57.403436    4712 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0906 16:55:57.403605    4712 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17174-979/.minikube/machines/newest-cni-401000/disk.qcow2.raw /Users/jenkins/minikube-integration/17174-979/.minikube/machines/newest-cni-401000/disk.qcow2
	I0906 16:55:57.412603    4712 main.go:141] libmachine: STDOUT: 
	I0906 16:55:57.412619    4712 main.go:141] libmachine: STDERR: 
	I0906 16:55:57.412694    4712 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/newest-cni-401000/disk.qcow2 +20000M
	I0906 16:55:57.419924    4712 main.go:141] libmachine: STDOUT: Image resized.
	
	I0906 16:55:57.419937    4712 main.go:141] libmachine: STDERR: 
	I0906 16:55:57.419950    4712 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17174-979/.minikube/machines/newest-cni-401000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17174-979/.minikube/machines/newest-cni-401000/disk.qcow2
	I0906 16:55:57.419954    4712 main.go:141] libmachine: Starting QEMU VM...
	I0906 16:55:57.419999    4712 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/newest-cni-401000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/newest-cni-401000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/newest-cni-401000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:f1:41:0c:71:59 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/newest-cni-401000/disk.qcow2
	I0906 16:55:57.421644    4712 main.go:141] libmachine: STDOUT: 
	I0906 16:55:57.421657    4712 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:55:57.421669    4712 client.go:171] LocalClient.Create took 344.872625ms
	I0906 16:55:59.423775    4712 start.go:128] duration metric: createHost completed in 2.401904208s
	I0906 16:55:59.423827    4712 start.go:83] releasing machines lock for "newest-cni-401000", held for 2.402510417s
	W0906 16:55:59.424185    4712 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-401000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-401000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:55:59.432718    4712 out.go:177] 
	W0906 16:55:59.435892    4712 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 16:55:59.435948    4712 out.go:239] * 
	* 
	W0906 16:55:59.438805    4712 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 16:55:59.449817    4712 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-401000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-401000 -n newest-cni-401000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-401000 -n newest-cni-401000: exit status 7 (67.570458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-401000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.89s)
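Both create attempts above fail at the same point: `socket_vmnet_client` cannot reach the daemon socket at `/var/run/socket_vmnet` (Connection refused), so the QEMU VM never starts. A minimal diagnostic sketch for this condition (the socket path is taken from the log's `SocketVMnetPath`; the Homebrew service name in the comment is an assumption based on a default socket_vmnet install):

```shell
# Check whether the socket_vmnet daemon socket exists on this host.
# /var/run/socket_vmnet matches SocketVMnetPath in the config dump above.
SOCK=/var/run/socket_vmnet
if [ -S "$SOCK" ]; then
  STATUS=present
else
  STATUS=missing
fi
echo "socket_vmnet socket: $STATUS"
# If missing, the daemon likely needs to be (re)started before retrying the
# test, e.g. `sudo brew services start socket_vmnet` on a Homebrew install
# (assumption: socket_vmnet was installed via Homebrew, as /opt/socket_vmnet
# paths in the log suggest a standard install).
```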

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-236000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-236000 -n default-k8s-diff-port-236000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-236000 -n default-k8s-diff-port-236000: exit status 7 (31.471542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-236000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-236000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-236000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-236000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.711708ms)

** stderr ** 
	error: context "default-k8s-diff-port-236000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-236000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-236000 -n default-k8s-diff-port-236000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-236000 -n default-k8s-diff-port-236000: exit status 7 (28.28575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-236000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.05s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-236000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-236000 "sudo crictl images -o json": exit status 89 (38.565917ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-236000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-236000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p default-k8s-diff-port-236000"
start_stop_delete_test.go:304: v1.28.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.1",
- 	"registry.k8s.io/kube-controller-manager:v1.28.1",
- 	"registry.k8s.io/kube-proxy:v1.28.1",
- 	"registry.k8s.io/kube-scheduler:v1.28.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-236000 -n default-k8s-diff-port-236000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-236000 -n default-k8s-diff-port-236000: exit status 7 (28.86375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-236000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-236000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-236000 --alsologtostderr -v=1: exit status 89 (39.69275ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-236000"

-- /stdout --
** stderr ** 
	I0906 16:55:54.518865    4734 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:55:54.519000    4734 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:55:54.519003    4734 out.go:309] Setting ErrFile to fd 2...
	I0906 16:55:54.519005    4734 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:55:54.519118    4734 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:55:54.519325    4734 out.go:303] Setting JSON to false
	I0906 16:55:54.519333    4734 mustload.go:65] Loading cluster: default-k8s-diff-port-236000
	I0906 16:55:54.519497    4734 config.go:182] Loaded profile config "default-k8s-diff-port-236000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:55:54.522986    4734 out.go:177] * The control plane node must be running for this command
	I0906 16:55:54.527017    4734 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-236000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-236000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-236000 -n default-k8s-diff-port-236000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-236000 -n default-k8s-diff-port-236000: exit status 7 (28.560333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-236000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-236000 -n default-k8s-diff-port-236000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-236000 -n default-k8s-diff-port-236000: exit status 7 (28.744208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-236000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-401000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-401000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (5.178540833s)

-- stdout --
	* [newest-cni-401000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node newest-cni-401000 in cluster newest-cni-401000
	* Restarting existing qemu2 VM for "newest-cni-401000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-401000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0906 16:55:59.770353    4771 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:55:59.770488    4771 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:55:59.770491    4771 out.go:309] Setting ErrFile to fd 2...
	I0906 16:55:59.770493    4771 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:55:59.770660    4771 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:55:59.771629    4771 out.go:303] Setting JSON to false
	I0906 16:55:59.786625    4771 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1533,"bootTime":1694043026,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0906 16:55:59.786723    4771 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 16:55:59.791729    4771 out.go:177] * [newest-cni-401000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 16:55:59.798779    4771 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 16:55:59.798835    4771 notify.go:220] Checking for updates...
	I0906 16:55:59.802697    4771 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:55:59.806728    4771 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 16:55:59.809750    4771 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 16:55:59.812720    4771 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	I0906 16:55:59.815697    4771 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 16:55:59.818993    4771 config.go:182] Loaded profile config "newest-cni-401000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:55:59.819232    4771 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 16:55:59.823674    4771 out.go:177] * Using the qemu2 driver based on existing profile
	I0906 16:55:59.830741    4771 start.go:298] selected driver: qemu2
	I0906 16:55:59.830747    4771 start.go:902] validating driver "qemu2" against &{Name:newest-cni-401000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-401000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<n
il> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:55:59.830805    4771 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 16:55:59.832903    4771 start_flags.go:941] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0906 16:55:59.832932    4771 cni.go:84] Creating CNI manager for ""
	I0906 16:55:59.832939    4771 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 16:55:59.832945    4771 start_flags.go:321] config:
	{Name:newest-cni-401000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-401000 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeReques
ted:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:55:59.836983    4771 iso.go:125] acquiring lock: {Name:mk208580a41bd83b6238142fbb7ad0497ef8967b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:55:59.841692    4771 out.go:177] * Starting control plane node newest-cni-401000 in cluster newest-cni-401000
	I0906 16:55:59.847697    4771 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 16:55:59.847723    4771 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 16:55:59.847740    4771 cache.go:57] Caching tarball of preloaded images
	I0906 16:55:59.847811    4771 preload.go:174] Found /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 16:55:59.847823    4771 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 16:55:59.847882    4771 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/newest-cni-401000/config.json ...
	I0906 16:55:59.848166    4771 start.go:365] acquiring machines lock for newest-cni-401000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:55:59.848192    4771 start.go:369] acquired machines lock for "newest-cni-401000" in 19.542µs
	I0906 16:55:59.848201    4771 start.go:96] Skipping create...Using existing machine configuration
	I0906 16:55:59.848205    4771 fix.go:54] fixHost starting: 
	I0906 16:55:59.848322    4771 fix.go:102] recreateIfNeeded on newest-cni-401000: state=Stopped err=<nil>
	W0906 16:55:59.848330    4771 fix.go:128] unexpected machine state, will restart: <nil>
	I0906 16:55:59.852747    4771 out.go:177] * Restarting existing qemu2 VM for "newest-cni-401000" ...
	I0906 16:55:59.860710    4771 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/newest-cni-401000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/newest-cni-401000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/newest-cni-401000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:f1:41:0c:71:59 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/newest-cni-401000/disk.qcow2
	I0906 16:55:59.862678    4771 main.go:141] libmachine: STDOUT: 
	I0906 16:55:59.862696    4771 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:55:59.862722    4771 fix.go:56] fixHost completed within 14.516083ms
	I0906 16:55:59.862759    4771 start.go:83] releasing machines lock for "newest-cni-401000", held for 14.564416ms
	W0906 16:55:59.862766    4771 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 16:55:59.862800    4771 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:55:59.862804    4771 start.go:687] Will try again in 5 seconds ...
	I0906 16:56:04.864690    4771 start.go:365] acquiring machines lock for newest-cni-401000: {Name:mka76f6f627639febeb022f67da62b92b7e33bc1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 16:56:04.865214    4771 start.go:369] acquired machines lock for "newest-cni-401000" in 394.458µs
	I0906 16:56:04.865371    4771 start.go:96] Skipping create...Using existing machine configuration
	I0906 16:56:04.865396    4771 fix.go:54] fixHost starting: 
	I0906 16:56:04.866241    4771 fix.go:102] recreateIfNeeded on newest-cni-401000: state=Stopped err=<nil>
	W0906 16:56:04.866271    4771 fix.go:128] unexpected machine state, will restart: <nil>
	I0906 16:56:04.870640    4771 out.go:177] * Restarting existing qemu2 VM for "newest-cni-401000" ...
	I0906 16:56:04.877842    4771 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17174-979/.minikube/machines/newest-cni-401000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/newest-cni-401000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17174-979/.minikube/machines/newest-cni-401000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:f1:41:0c:71:59 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17174-979/.minikube/machines/newest-cni-401000/disk.qcow2
	I0906 16:56:04.886151    4771 main.go:141] libmachine: STDOUT: 
	I0906 16:56:04.886223    4771 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0906 16:56:04.886306    4771 fix.go:56] fixHost completed within 20.9145ms
	I0906 16:56:04.886352    4771 start.go:83] releasing machines lock for "newest-cni-401000", held for 21.067458ms
	W0906 16:56:04.886586    4771 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-401000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-401000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0906 16:56:04.894626    4771 out.go:177] 
	W0906 16:56:04.898723    4771 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0906 16:56:04.898751    4771 out.go:239] * 
	* 
	W0906 16:56:04.901346    4771 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 16:56:04.909625    4771 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-401000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-401000 -n newest-cni-401000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-401000 -n newest-cni-401000: exit status 7 (67.631ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-401000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
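The repeated driver failure above is `Failed to connect to "/var/run/socket_vmnet": Connection refused`: the qemu2 driver can reach the socket path, but no `socket_vmnet` daemon is accepting on it. A minimal Python probe (illustrative only, not part of minikube; the path is taken from the log above) distinguishes that case from the socket file being missing entirely:

```python
import socket

def probe_unix_socket(path: str) -> str:
    """Classify why a unix-domain socket is (or is not) reachable."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        return "listening"    # a daemon accepted the connection
    except FileNotFoundError:
        return "missing"      # socket file was never created
    except ConnectionRefusedError:
        return "refused"      # file exists, but nothing is accepting on it
    finally:
        s.close()

# "refused" corresponds to the error in the log: the fix is to (re)start the
# socket_vmnet daemon, not to recreate the VM.
print(probe_unix_socket("/var/run/socket_vmnet"))
```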

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p newest-cni-401000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p newest-cni-401000 "sudo crictl images -o json": exit status 89 (43.296417ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-401000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p newest-cni-401000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p newest-cni-401000"
start_stop_delete_test.go:304: v1.28.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.1",
- 	"registry.k8s.io/kube-controller-manager:v1.28.1",
- 	"registry.k8s.io/kube-proxy:v1.28.1",
- 	"registry.k8s.io/kube-scheduler:v1.28.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-401000 -n newest-cni-401000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-401000 -n newest-cni-401000: exit status 7 (29.643375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-401000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)
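The decode error above (`invalid character '*' looking for beginning of value`) is Go's encoding/json rejecting the plain-text control-plane message that `minikube ssh` returned in place of `crictl` JSON. The same mismatch reproduced as a short Python sketch (illustrative only, not minikube code):

```python
import json

# What the test actually received (from the log above) instead of JSON:
not_json = '* The control plane node must be running for this command\n'

def parse_images(raw: str):
    """Return the decoded image listing, or None if the reply is not JSON."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return None

assert parse_images(not_json) is None               # the '*' message is not JSON
assert parse_images('{"images": []}') == {"images": []}
```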

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-401000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-401000 --alsologtostderr -v=1: exit status 89 (39.939166ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-401000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 16:56:05.091648    4785 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:56:05.091789    4785 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:56:05.091792    4785 out.go:309] Setting ErrFile to fd 2...
	I0906 16:56:05.091794    4785 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:56:05.091907    4785 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:56:05.092124    4785 out.go:303] Setting JSON to false
	I0906 16:56:05.092133    4785 mustload.go:65] Loading cluster: newest-cni-401000
	I0906 16:56:05.092314    4785 config.go:182] Loaded profile config "newest-cni-401000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:56:05.095255    4785 out.go:177] * The control plane node must be running for this command
	I0906 16:56:05.099281    4785 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-401000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-401000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-401000 -n newest-cni-401000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-401000 -n newest-cni-401000: exit status 7 (29.079458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-401000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-401000 -n newest-cni-401000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-401000 -n newest-cni-401000: exit status 7 (29.335208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-401000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

                                                
                                    

Test pass (137/244)

Order passed test Duration
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.28.1/json-events 16.1
11 TestDownloadOnly/v1.28.1/preload-exists 0
14 TestDownloadOnly/v1.28.1/kubectl 0
15 TestDownloadOnly/v1.28.1/LogsDuration 0.07
16 TestDownloadOnly/DeleteAll 0.24
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.24
19 TestBinaryMirror 0.39
30 TestHyperKitDriverInstallOrUpdate 8.31
33 TestErrorSpam/setup 31.78
34 TestErrorSpam/start 0.34
35 TestErrorSpam/status 0.25
36 TestErrorSpam/pause 0.62
37 TestErrorSpam/unpause 0.62
38 TestErrorSpam/stop 3.24
41 TestFunctional/serial/CopySyncFile 0
42 TestFunctional/serial/StartWithProxy 45.43
43 TestFunctional/serial/AuditLog 0
44 TestFunctional/serial/SoftStart 35.68
45 TestFunctional/serial/KubeContext 0.03
46 TestFunctional/serial/KubectlGetPods 0.06
49 TestFunctional/serial/CacheCmd/cache/add_remote 3.56
50 TestFunctional/serial/CacheCmd/cache/add_local 1.2
51 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.03
52 TestFunctional/serial/CacheCmd/cache/list 0.03
53 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
54 TestFunctional/serial/CacheCmd/cache/cache_reload 0.93
55 TestFunctional/serial/CacheCmd/cache/delete 0.07
56 TestFunctional/serial/MinikubeKubectlCmd 0.41
57 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.54
58 TestFunctional/serial/ExtraConfig 37.71
59 TestFunctional/serial/ComponentHealth 0.04
60 TestFunctional/serial/LogsCmd 0.66
61 TestFunctional/serial/LogsFileCmd 0.66
62 TestFunctional/serial/InvalidService 4.31
64 TestFunctional/parallel/ConfigCmd 0.2
65 TestFunctional/parallel/DashboardCmd 10.63
66 TestFunctional/parallel/DryRun 0.23
67 TestFunctional/parallel/InternationalLanguage 0.11
68 TestFunctional/parallel/StatusCmd 0.26
73 TestFunctional/parallel/AddonsCmd 0.12
74 TestFunctional/parallel/PersistentVolumeClaim 26.35
76 TestFunctional/parallel/SSHCmd 0.15
77 TestFunctional/parallel/CpCmd 0.3
79 TestFunctional/parallel/FileSync 0.08
80 TestFunctional/parallel/CertSync 0.45
84 TestFunctional/parallel/NodeLabels 0.04
86 TestFunctional/parallel/NonActiveRuntimeDisabled 0.09
88 TestFunctional/parallel/License 0.25
89 TestFunctional/parallel/Version/short 0.09
90 TestFunctional/parallel/Version/components 0.21
91 TestFunctional/parallel/ImageCommands/ImageListShort 0.08
92 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
93 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
94 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
95 TestFunctional/parallel/ImageCommands/ImageBuild 1.74
96 TestFunctional/parallel/ImageCommands/Setup 1.47
97 TestFunctional/parallel/DockerEnv/bash 0.41
98 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
99 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
100 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
101 TestFunctional/parallel/ServiceCmd/DeployApp 11.11
102 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.32
103 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.62
104 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.53
105 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.53
106 TestFunctional/parallel/ImageCommands/ImageRemove 0.17
107 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.62
108 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.69
111 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
113 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.12
114 TestFunctional/parallel/ServiceCmd/List 0.11
115 TestFunctional/parallel/ServiceCmd/JSONOutput 0.1
116 TestFunctional/parallel/ServiceCmd/HTTPS 0.11
117 TestFunctional/parallel/ServiceCmd/Format 0.11
118 TestFunctional/parallel/ServiceCmd/URL 0.11
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
122 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
123 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.19
126 TestFunctional/parallel/ProfileCmd/profile_list 0.15
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.16
128 TestFunctional/parallel/MountCmd/any-port 5.34
129 TestFunctional/parallel/MountCmd/specific-port 0.78
130 TestFunctional/parallel/MountCmd/VerifyCleanup 0.75
131 TestFunctional/delete_addon-resizer_images 0.12
132 TestFunctional/delete_my-image_image 0.04
133 TestFunctional/delete_minikube_cached_images 0.04
137 TestImageBuild/serial/Setup 28.36
138 TestImageBuild/serial/NormalBuild 1.05
140 TestImageBuild/serial/BuildWithDockerIgnore 0.12
141 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.1
144 TestIngressAddonLegacy/StartLegacyK8sCluster 68.18
146 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 14.86
147 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.26
151 TestJSONOutput/start/Command 45.38
152 TestJSONOutput/start/Audit 0
154 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
155 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
157 TestJSONOutput/pause/Command 0.29
158 TestJSONOutput/pause/Audit 0
160 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
163 TestJSONOutput/unpause/Command 0.25
164 TestJSONOutput/unpause/Audit 0
166 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/stop/Command 12.07
170 TestJSONOutput/stop/Audit 0
172 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
174 TestErrorJSONOutput 0.33
179 TestMainNoArgs 0.03
180 TestMinikubeProfile 69.13
236 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
240 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
241 TestNoKubernetes/serial/ProfileList 0.15
242 TestNoKubernetes/serial/Stop 0.06
244 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
262 TestStartStop/group/old-k8s-version/serial/Stop 0.06
263 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.1
267 TestStartStop/group/no-preload/serial/Stop 0.06
268 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.09
284 TestStartStop/group/embed-certs/serial/Stop 0.06
285 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.09
289 TestStartStop/group/default-k8s-diff-port/serial/Stop 0.09
290 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.09
302 TestStartStop/group/newest-cni/serial/DeployApp 0
303 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
304 TestStartStop/group/newest-cni/serial/Stop 0.06
305 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.09
307 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
308 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-830000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-830000: exit status 85 (90.122209ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-830000 | jenkins | v1.31.2 | 06 Sep 23 16:36 PDT |          |
	|         | -p download-only-830000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/06 16:36:48
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 16:36:48.822815    1399 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:36:48.822968    1399 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:36:48.822971    1399 out.go:309] Setting ErrFile to fd 2...
	I0906 16:36:48.822973    1399 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:36:48.823084    1399 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	W0906 16:36:48.823158    1399 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17174-979/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17174-979/.minikube/config/config.json: no such file or directory
	I0906 16:36:48.824254    1399 out.go:303] Setting JSON to true
	I0906 16:36:48.840743    1399 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":382,"bootTime":1694043026,"procs":412,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0906 16:36:48.840795    1399 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 16:36:48.846205    1399 out.go:97] [download-only-830000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 16:36:48.849256    1399 out.go:169] MINIKUBE_LOCATION=17174
	I0906 16:36:48.846365    1399 notify.go:220] Checking for updates...
	W0906 16:36:48.846355    1399 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball: no such file or directory
	I0906 16:36:48.855148    1399 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:36:48.858180    1399 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 16:36:48.861209    1399 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 16:36:48.864128    1399 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	W0906 16:36:48.870232    1399 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0906 16:36:48.870517    1399 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 16:36:48.873328    1399 out.go:97] Using the qemu2 driver based on user configuration
	I0906 16:36:48.873333    1399 start.go:298] selected driver: qemu2
	I0906 16:36:48.873335    1399 start.go:902] validating driver "qemu2" against <nil>
	I0906 16:36:48.873388    1399 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0906 16:36:48.877191    1399 out.go:169] Automatically selected the socket_vmnet network
	I0906 16:36:48.882647    1399 start_flags.go:384] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0906 16:36:48.882738    1399 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0906 16:36:48.882806    1399 cni.go:84] Creating CNI manager for ""
	I0906 16:36:48.882821    1399 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0906 16:36:48.882827    1399 start_flags.go:321] config:
	{Name:download-only-830000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-830000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRun
time:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:36:48.888416    1399 iso.go:125] acquiring lock: {Name:mk208580a41bd83b6238142fbb7ad0497ef8967b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:36:48.892225    1399 out.go:97] Downloading VM boot image ...
	I0906 16:36:48.892255    1399 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17174-979/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso
	I0906 16:36:53.024730    1399 out.go:97] Starting control plane node download-only-830000 in cluster download-only-830000
	I0906 16:36:53.024750    1399 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0906 16:36:53.080086    1399 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0906 16:36:53.080161    1399 cache.go:57] Caching tarball of preloaded images
	I0906 16:36:53.080326    1399 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0906 16:36:53.085451    1399 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0906 16:36:53.085458    1399 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0906 16:36:53.167439    1399 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0906 16:37:01.626139    1399 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0906 16:37:01.626280    1399 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0906 16:37:02.268529    1399 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0906 16:37:02.268723    1399 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/download-only-830000/config.json ...
	I0906 16:37:02.268741    1399 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/download-only-830000/config.json: {Name:mk90bc48de1e752792895fdcb60d4de4be53d699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 16:37:02.268948    1399 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0906 16:37:02.269112    1399 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17174-979/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0906 16:37:02.694579    1399 out.go:169] 
	W0906 16:37:02.699494    1399 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17174-979/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106499f68 0x106499f68 0x106499f68 0x106499f68 0x106499f68 0x106499f68 0x106499f68] Decompressors:map[bz2:0x14000057de0 gz:0x14000057de8 tar:0x14000057d90 tar.bz2:0x14000057da0 tar.gz:0x14000057db0 tar.xz:0x14000057dc0 tar.zst:0x14000057dd0 tbz2:0x14000057da0 tgz:0x14000057db0 txz:0x14000057dc0 tzst:0x14000057dd0 xz:0x14000057df0 zip:0x14000057e00 zst:0x14000057df8] Getters:map[file:0x140003f45b0 http:0x14000b04140 https:0x14000b04190] Dir:false ProgressListener:
<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0906 16:37:02.699523    1399 out_reason.go:110] 
	W0906 16:37:02.706488    1399 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 16:37:02.710477    1399 out.go:169] 
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-830000"
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
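The 404 in the stdout dump above comes from the way the kubectl download URL is built: a `checksum=file:<url>` query parameter (go-getter style) points the downloader at a sidecar digest file, and for v1.16.0 there are no darwin/arm64 Kubernetes release binaries, so the `.sha1` sidecar does not exist. A minimal sketch of the URL composition (the helper name is illustrative, not minikube's API):

```python
# Hypothetical sketch of how the failing download URL in the log is composed.
# The `checksum=file:<url>` suffix asks the getter to fetch a sidecar digest
# file next to the binary; dl.k8s.io returns 404 for it here because
# v1.16.0 predates darwin/arm64 release builds.
def kubectl_url(version: str, goos: str, goarch: str, algo: str) -> str:
    base = f"https://dl.k8s.io/release/{version}/bin/{goos}/{goarch}/kubectl"
    return f"{base}?checksum=file:{base}.{algo}"

print(kubectl_url("v1.16.0", "darwin", "arm64", "sha1"))
```

The printed URL matches the one reported in the `Failed to cache kubectl` error above.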
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

TestDownloadOnly/v1.28.1/json-events (16.1s)

=== RUN   TestDownloadOnly/v1.28.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-830000 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-830000 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=docker --driver=qemu2 : (16.100166459s)
--- PASS: TestDownloadOnly/v1.28.1/json-events (16.10s)

TestDownloadOnly/v1.28.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.1/preload-exists
--- PASS: TestDownloadOnly/v1.28.1/preload-exists (0.00s)

TestDownloadOnly/v1.28.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.1/kubectl
--- PASS: TestDownloadOnly/v1.28.1/kubectl (0.00s)

TestDownloadOnly/v1.28.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.1/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-830000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-830000: exit status 85 (71.445584ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-830000 | jenkins | v1.31.2 | 06 Sep 23 16:36 PDT |          |
	|         | -p download-only-830000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-830000 | jenkins | v1.31.2 | 06 Sep 23 16:37 PDT |          |
	|         | -p download-only-830000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.1   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/06 16:37:02
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 16:37:02.893668    1415 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:37:02.893769    1415 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:37:02.893771    1415 out.go:309] Setting ErrFile to fd 2...
	I0906 16:37:02.893774    1415 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:37:02.893884    1415 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	W0906 16:37:02.893945    1415 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17174-979/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17174-979/.minikube/config/config.json: no such file or directory
	I0906 16:37:02.894854    1415 out.go:303] Setting JSON to true
	I0906 16:37:02.909792    1415 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":396,"bootTime":1694043026,"procs":415,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0906 16:37:02.909868    1415 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 16:37:02.914308    1415 out.go:97] [download-only-830000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 16:37:02.918313    1415 out.go:169] MINIKUBE_LOCATION=17174
	I0906 16:37:02.914399    1415 notify.go:220] Checking for updates...
	I0906 16:37:02.924324    1415 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:37:02.927354    1415 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 16:37:02.930381    1415 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 16:37:02.933328    1415 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	W0906 16:37:02.937696    1415 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0906 16:37:02.937966    1415 config.go:182] Loaded profile config "download-only-830000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0906 16:37:02.937995    1415 start.go:810] api.Load failed for download-only-830000: filestore "download-only-830000": Docker machine "download-only-830000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0906 16:37:02.938037    1415 driver.go:373] Setting default libvirt URI to qemu:///system
	W0906 16:37:02.938053    1415 start.go:810] api.Load failed for download-only-830000: filestore "download-only-830000": Docker machine "download-only-830000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0906 16:37:02.941281    1415 out.go:97] Using the qemu2 driver based on existing profile
	I0906 16:37:02.941293    1415 start.go:298] selected driver: qemu2
	I0906 16:37:02.941295    1415 start.go:902] validating driver "qemu2" against &{Name:download-only-830000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-830000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:37:02.943144    1415 cni.go:84] Creating CNI manager for ""
	I0906 16:37:02.943156    1415 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 16:37:02.943164    1415 start_flags.go:321] config:
	{Name:download-only-830000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:download-only-830000 Namespace:def
ault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath
: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:37:02.947111    1415 iso.go:125] acquiring lock: {Name:mk208580a41bd83b6238142fbb7ad0497ef8967b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 16:37:02.950330    1415 out.go:97] Starting control plane node download-only-830000 in cluster download-only-830000
	I0906 16:37:02.950338    1415 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 16:37:03.005423    1415 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.1/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 16:37:03.005450    1415 cache.go:57] Caching tarball of preloaded images
	I0906 16:37:03.005619    1415 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 16:37:03.010680    1415 out.go:97] Downloading Kubernetes v1.28.1 preload ...
	I0906 16:37:03.010687    1415 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 ...
	I0906 16:37:03.095542    1415 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.1/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4?checksum=md5:014fa2c9750ed18a91c50dffb6ed7aeb -> /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0906 16:37:17.003942    1415 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 ...
	I0906 16:37:17.004082    1415 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17174-979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 ...
	I0906 16:37:17.587105    1415 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0906 16:37:17.587181    1415 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/download-only-830000/config.json ...
	I0906 16:37:17.587443    1415 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0906 16:37:17.587601    1415 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17174-979/.minikube/cache/darwin/arm64/v1.28.1/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-830000"
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.1/LogsDuration (0.07s)

TestDownloadOnly/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.24s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.24s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-830000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.24s)

TestBinaryMirror (0.39s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-773000 --alsologtostderr --binary-mirror http://127.0.0.1:49310 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-773000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-773000
--- PASS: TestBinaryMirror (0.39s)

TestHyperKitDriverInstallOrUpdate (8.31s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (8.31s)

TestErrorSpam/setup (31.78s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-946000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-946000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-946000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-946000 --driver=qemu2 : (31.781827542s)
--- PASS: TestErrorSpam/setup (31.78s)

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-946000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-946000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-946000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-946000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-946000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-946000 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.25s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-946000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-946000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-946000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-946000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-946000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-946000 status
--- PASS: TestErrorSpam/status (0.25s)

TestErrorSpam/pause (0.62s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-946000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-946000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-946000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-946000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-946000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-946000 pause
--- PASS: TestErrorSpam/pause (0.62s)

TestErrorSpam/unpause (0.62s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-946000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-946000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-946000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-946000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-946000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-946000 unpause
--- PASS: TestErrorSpam/unpause (0.62s)

TestErrorSpam/stop (3.24s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-946000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-946000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-946000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-946000 stop: (3.069477375s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-946000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-946000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-946000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-946000 stop
--- PASS: TestErrorSpam/stop (3.24s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/17174-979/.minikube/files/etc/test/nested/copy/1397/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (45.43s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-526000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-526000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (45.426866791s)
--- PASS: TestFunctional/serial/StartWithProxy (45.43s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (35.68s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-526000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-526000 --alsologtostderr -v=8: (35.6842595s)
functional_test.go:659: soft start took 35.684643291s for "functional-526000" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.68s)
TestFunctional/serial/KubeContext (0.03s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)
TestFunctional/serial/KubectlGetPods (0.06s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-526000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)
TestFunctional/serial/CacheCmd/cache/add_remote (3.56s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-526000 cache add registry.k8s.io/pause:3.1: (1.245534125s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-526000 cache add registry.k8s.io/pause:3.3: (1.20164975s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-526000 cache add registry.k8s.io/pause:latest: (1.1114135s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.56s)
TestFunctional/serial/CacheCmd/cache/add_local (1.2s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-526000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local2608720458/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 cache add minikube-local-cache-test:functional-526000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 cache delete minikube-local-cache-test:functional-526000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-526000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.20s)
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)
TestFunctional/serial/CacheCmd/cache/list (0.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)
TestFunctional/serial/CacheCmd/cache/cache_reload (0.93s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-526000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (75.439ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.93s)
TestFunctional/serial/CacheCmd/cache/delete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)
TestFunctional/serial/MinikubeKubectlCmd (0.41s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 kubectl -- --context functional-526000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.41s)
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.54s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-526000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.54s)
TestFunctional/serial/ExtraConfig (37.71s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-526000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-526000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.70596675s)
functional_test.go:757: restart took 37.706079708s for "functional-526000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.71s)
TestFunctional/serial/ComponentHealth (0.04s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-526000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)
TestFunctional/serial/LogsCmd (0.66s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.66s)
TestFunctional/serial/LogsFileCmd (0.66s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd2906987062/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.66s)
TestFunctional/serial/InvalidService (4.31s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-526000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-526000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-526000: exit status 115 (114.867542ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:31128 |
	|-----------|-------------|-------------|----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-526000 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-526000 delete -f testdata/invalidsvc.yaml: (1.076805041s)
--- PASS: TestFunctional/serial/InvalidService (4.31s)
TestFunctional/parallel/ConfigCmd (0.2s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-526000 config get cpus: exit status 14 (28.799541ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-526000 config get cpus: exit status 14 (29.150792ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.20s)
TestFunctional/parallel/DashboardCmd (10.63s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-526000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-526000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2088: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.63s)
TestFunctional/parallel/DryRun (0.23s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-526000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-526000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (130.292042ms)
-- stdout --
	* [functional-526000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0906 16:41:40.265189    2071 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:41:40.265334    2071 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:41:40.265338    2071 out.go:309] Setting ErrFile to fd 2...
	I0906 16:41:40.265340    2071 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:41:40.265458    2071 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:41:40.266748    2071 out.go:303] Setting JSON to false
	I0906 16:41:40.286606    2071 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":674,"bootTime":1694043026,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0906 16:41:40.286710    2071 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 16:41:40.291586    2071 out.go:177] * [functional-526000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	I0906 16:41:40.303596    2071 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 16:41:40.307508    2071 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:41:40.303631    2071 notify.go:220] Checking for updates...
	I0906 16:41:40.314506    2071 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 16:41:40.317618    2071 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 16:41:40.320540    2071 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	I0906 16:41:40.327458    2071 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 16:41:40.330751    2071 config.go:182] Loaded profile config "functional-526000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:41:40.330994    2071 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 16:41:40.335545    2071 out.go:177] * Using the qemu2 driver based on existing profile
	I0906 16:41:40.343426    2071 start.go:298] selected driver: qemu2
	I0906 16:41:40.343430    2071 start.go:902] validating driver "qemu2" against &{Name:functional-526000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-526000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:41:40.343474    2071 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 16:41:40.349420    2071 out.go:177] 
	W0906 16:41:40.353472    2071 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0906 16:41:40.357493    2071 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-526000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
+
TestFunctional/parallel/InternationalLanguage (0.11s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-526000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-526000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (106.9365ms)
-- stdout --
	* [functional-526000] minikube v1.31.2 sur Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0906 16:41:40.493089    2082 out.go:296] Setting OutFile to fd 1 ...
	I0906 16:41:40.493188    2082 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:41:40.493191    2082 out.go:309] Setting ErrFile to fd 2...
	I0906 16:41:40.493194    2082 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0906 16:41:40.493316    2082 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
	I0906 16:41:40.494622    2082 out.go:303] Setting JSON to false
	I0906 16:41:40.511308    2082 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":674,"bootTime":1694043026,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.1","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0906 16:41:40.511395    2082 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0906 16:41:40.515558    2082 out.go:177] * [functional-526000] minikube v1.31.2 sur Darwin 13.5.1 (arm64)
	I0906 16:41:40.522487    2082 out.go:177]   - MINIKUBE_LOCATION=17174
	I0906 16:41:40.522539    2082 notify.go:220] Checking for updates...
	I0906 16:41:40.529372    2082 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	I0906 16:41:40.532469    2082 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0906 16:41:40.535476    2082 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 16:41:40.536881    2082 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	I0906 16:41:40.539485    2082 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 16:41:40.542720    2082 config.go:182] Loaded profile config "functional-526000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0906 16:41:40.542954    2082 driver.go:373] Setting default libvirt URI to qemu:///system
	I0906 16:41:40.546334    2082 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0906 16:41:40.553443    2082 start.go:298] selected driver: qemu2
	I0906 16:41:40.553448    2082 start.go:902] validating driver "qemu2" against &{Name:functional-526000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-526000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0906 16:41:40.553522    2082 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 16:41:40.559527    2082 out.go:177] 
	W0906 16:41:40.563469    2082 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0906 16:41:40.567537    2082 out.go:177] 
** /stderr **
+
TestFunctional/parallel/StatusCmd (0.26s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 status -o json
+
TestFunctional/parallel/AddonsCmd (0.12s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

functional_test.go:1689: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 addons list -o json
+
TestFunctional/parallel/PersistentVolumeClaim (26.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [9ed43e75-d9cb-4cd6-85b2-88f3614befd1] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.007095042s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-526000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-526000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-526000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-526000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d5ebc8c8-7121-4067-96cf-cc8680b5baaa] Pending
helpers_test.go:344: "sp-pod" [d5ebc8c8-7121-4067-96cf-cc8680b5baaa] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d5ebc8c8-7121-4067-96cf-cc8680b5baaa] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.010692875s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-526000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-526000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-526000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ca7a62c3-55c5-4841-8cdc-9f4ed029bd15] Pending
helpers_test.go:344: "sp-pod" [ca7a62c3-55c5-4841-8cdc-9f4ed029bd15] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ca7a62c3-55c5-4841-8cdc-9f4ed029bd15] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.008072625s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-526000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.35s)

TestFunctional/parallel/SSHCmd (0.15s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.15s)

TestFunctional/parallel/CpCmd (0.3s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 ssh -n functional-526000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 cp functional-526000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd46326058/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 ssh -n functional-526000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.30s)

TestFunctional/parallel/FileSync (0.08s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1397/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 ssh "sudo cat /etc/test/nested/copy/1397/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.08s)

TestFunctional/parallel/CertSync (0.45s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1397.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 ssh "sudo cat /etc/ssl/certs/1397.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1397.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 ssh "sudo cat /usr/share/ca-certificates/1397.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/13972.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 ssh "sudo cat /etc/ssl/certs/13972.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/13972.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 ssh "sudo cat /usr/share/ca-certificates/13972.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.45s)

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-526000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.09s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-526000 ssh "sudo systemctl is-active crio": exit status 1 (88.417ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.09s)

TestFunctional/parallel/License (0.25s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.25s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (0.21s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.21s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-526000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.1
registry.k8s.io/kube-proxy:v1.28.1
registry.k8s.io/kube-controller-manager:v1.28.1
registry.k8s.io/kube-apiserver:v1.28.1
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-526000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-526000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-526000 image ls --format short --alsologtostderr:
I0906 16:41:48.463971    2110 out.go:296] Setting OutFile to fd 1 ...
I0906 16:41:48.464697    2110 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 16:41:48.464702    2110 out.go:309] Setting ErrFile to fd 2...
I0906 16:41:48.464705    2110 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 16:41:48.464864    2110 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
I0906 16:41:48.465668    2110 config.go:182] Loaded profile config "functional-526000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0906 16:41:48.465753    2110 config.go:182] Loaded profile config "functional-526000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0906 16:41:48.466617    2110 ssh_runner.go:195] Run: systemctl --version
I0906 16:41:48.466629    2110 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/functional-526000/id_rsa Username:docker}
I0906 16:41:48.503438    2110 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-526000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-proxy                  | v1.28.1           | 812f5241df7fd | 68.3MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| docker.io/localhost/my-image                | functional-526000 | 98eabf4d3a8b4 | 1.41MB |
| registry.k8s.io/etcd                        | 3.5.9-0           | 9cdd6470f48c8 | 181MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/k8s-minikube/busybox                 | latest            | 71a676dd070f4 | 1.41MB |
| registry.k8s.io/kube-apiserver              | v1.28.1           | b29fb62480892 | 119MB  |
| registry.k8s.io/kube-scheduler              | v1.28.1           | b4a5a57e99492 | 57.8MB |
| registry.k8s.io/kube-controller-manager     | v1.28.1           | 8b6e1980b7584 | 116MB  |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| gcr.io/google-containers/addon-resizer      | functional-526000 | ffd4cfbbe753e | 32.9MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| docker.io/library/minikube-local-cache-test | functional-526000 | de2bf0ba4f90b | 30B    |
| docker.io/library/nginx                     | alpine            | fa0c6bb795403 | 43.4MB |
| docker.io/library/nginx                     | latest            | ab73c7fd67234 | 192MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-526000 image ls --format table --alsologtostderr:
I0906 16:41:50.447629    2123 out.go:296] Setting OutFile to fd 1 ...
I0906 16:41:50.447785    2123 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 16:41:50.447788    2123 out.go:309] Setting ErrFile to fd 2...
I0906 16:41:50.447790    2123 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 16:41:50.447903    2123 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
I0906 16:41:50.448281    2123 config.go:182] Loaded profile config "functional-526000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0906 16:41:50.448340    2123 config.go:182] Loaded profile config "functional-526000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0906 16:41:50.449138    2123 ssh_runner.go:195] Run: systemctl --version
I0906 16:41:50.449149    2123 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/functional-526000/id_rsa Username:docker}
I0906 16:41:50.483859    2123 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2023/09/06 16:41:50 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-526000 image ls --format json --alsologtostderr:
[{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.1"],"size":"119000000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"98eabf4d3a8b41328ae148217cdaaf16076c826927da909696ebf21387e72417","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-526000"],"size":"1410000"},{"id":"fa0c6bb795403f8762e5cbf7b9f395aa036e7bd61c707485c1968b79bb3da9f1","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43400000"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1410000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-526000"],"size":"32900000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.1"],"size":"68300000"},{"id":"8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.1"],"size":"116000000"},{"id":"b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.1"],"size":"57800000"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"181000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"de2bf0ba4f90b2f0cb3bb06a33b5feb97ce7997e2dd0b16c8e94dd388d5b9569","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-526000"],"size":"30"},{"id":"ab73c7fd672341e41ec600081253d0b99ea31d0c1acdfb46a1485004472da7ac","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-526000 image ls --format json --alsologtostderr:
I0906 16:41:50.367694    2121 out.go:296] Setting OutFile to fd 1 ...
I0906 16:41:50.367850    2121 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 16:41:50.367853    2121 out.go:309] Setting ErrFile to fd 2...
I0906 16:41:50.367855    2121 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 16:41:50.367977    2121 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
I0906 16:41:50.368346    2121 config.go:182] Loaded profile config "functional-526000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0906 16:41:50.368402    2121 config.go:182] Loaded profile config "functional-526000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0906 16:41:50.369165    2121 ssh_runner.go:195] Run: systemctl --version
I0906 16:41:50.369177    2121 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/functional-526000/id_rsa Username:docker}
I0906 16:41:50.406284    2121 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)
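The `image ls --format json` stdout above is a flat array of records, each with `id`, `repoDigests`, `repoTags`, and `size` (bytes, as a string). A minimal sketch of consuming that shape with only the Python standard library, using two records copied from the output above:

```python
import json

# Two records in the same shape as the `image ls --format json` output
# above: id / repoDigests / repoTags / size (bytes, as a string).
raw = """[
  {"id": "8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a",
   "repoDigests": [], "repoTags": ["registry.k8s.io/pause:latest"], "size": "240000"},
  {"id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
   "repoDigests": [], "repoTags": ["registry.k8s.io/etcd:3.5.9-0"], "size": "181000000"}
]"""

images = json.loads(raw)

# Map every tag to its size in bytes; one image may carry several tags.
sizes = {tag: int(img["size"]) for img in images for tag in img["repoTags"]}

print(sizes["registry.k8s.io/pause:latest"])  # → 240000
```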

TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-526000 image ls --format yaml --alsologtostderr:
- id: fa0c6bb795403f8762e5cbf7b9f395aa036e7bd61c707485c1968b79bb3da9f1
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43400000"
- id: b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.1
size: "119000000"
- id: 812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.1
size: "68300000"
- id: 8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.1
size: "116000000"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "181000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51400000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-526000
size: "32900000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.1
size: "57800000"
- id: de2bf0ba4f90b2f0cb3bb06a33b5feb97ce7997e2dd0b16c8e94dd388d5b9569
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-526000
size: "30"
- id: ab73c7fd672341e41ec600081253d0b99ea31d0c1acdfb46a1485004472da7ac
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-526000 image ls --format yaml --alsologtostderr:
I0906 16:41:48.548515    2112 out.go:296] Setting OutFile to fd 1 ...
I0906 16:41:48.548650    2112 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 16:41:48.548652    2112 out.go:309] Setting ErrFile to fd 2...
I0906 16:41:48.548655    2112 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 16:41:48.548774    2112 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
I0906 16:41:48.549194    2112 config.go:182] Loaded profile config "functional-526000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0906 16:41:48.549253    2112 config.go:182] Loaded profile config "functional-526000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0906 16:41:48.550017    2112 ssh_runner.go:195] Run: systemctl --version
I0906 16:41:48.550027    2112 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/functional-526000/id_rsa Username:docker}
I0906 16:41:48.586615    2112 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-526000 ssh pgrep buildkitd: exit status 1 (69.952292ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 image build -t localhost/my-image:functional-526000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-526000 image build -t localhost/my-image:functional-526000 testdata/build --alsologtostderr: (1.589360208s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-526000 image build -t localhost/my-image:functional-526000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
a01966dde7f8: Pulling fs layer
a01966dde7f8: Verifying Checksum
a01966dde7f8: Download complete
a01966dde7f8: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> 71a676dd070f
Step 2/3 : RUN true
---> Running in df157cb75a39
Removing intermediate container df157cb75a39
---> f2ed56e8a54b
Step 3/3 : ADD content.txt /
---> 98eabf4d3a8b
Successfully built 98eabf4d3a8b
Successfully tagged localhost/my-image:functional-526000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-526000 image build -t localhost/my-image:functional-526000 testdata/build --alsologtostderr:
I0906 16:41:48.699578    2116 out.go:296] Setting OutFile to fd 1 ...
I0906 16:41:48.699769    2116 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 16:41:48.699773    2116 out.go:309] Setting ErrFile to fd 2...
I0906 16:41:48.699775    2116 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 16:41:48.699901    2116 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17174-979/.minikube/bin
I0906 16:41:48.700296    2116 config.go:182] Loaded profile config "functional-526000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0906 16:41:48.700671    2116 config.go:182] Loaded profile config "functional-526000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0906 16:41:48.701548    2116 ssh_runner.go:195] Run: systemctl --version
I0906 16:41:48.701556    2116 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17174-979/.minikube/machines/functional-526000/id_rsa Username:docker}
I0906 16:41:48.735848    2116 build_images.go:151] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.15509523.tar
I0906 16:41:48.735911    2116 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0906 16:41:48.738848    2116 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.15509523.tar
I0906 16:41:48.740367    2116 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.15509523.tar: stat -c "%s %y" /var/lib/minikube/build/build.15509523.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.15509523.tar': No such file or directory
I0906 16:41:48.740383    2116 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.15509523.tar --> /var/lib/minikube/build/build.15509523.tar (3072 bytes)
I0906 16:41:48.748225    2116 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.15509523
I0906 16:41:48.751420    2116 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.15509523 -xf /var/lib/minikube/build/build.15509523.tar
I0906 16:41:48.754259    2116 docker.go:339] Building image: /var/lib/minikube/build/build.15509523
I0906 16:41:48.754295    2116 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-526000 /var/lib/minikube/build/build.15509523
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0906 16:41:50.245851    2116 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-526000 /var/lib/minikube/build/build.15509523: (1.491572542s)
I0906 16:41:50.245910    2116 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.15509523
I0906 16:41:50.248964    2116 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.15509523.tar
I0906 16:41:50.251756    2116 build_images.go:207] Built localhost/my-image:functional-526000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.15509523.tar
I0906 16:41:50.251769    2116 build_images.go:123] succeeded building to: functional-526000
I0906 16:41:50.251772    2116 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.74s)

TestFunctional/parallel/ImageCommands/Setup (1.47s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.421311334s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-526000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.47s)

TestFunctional/parallel/DockerEnv/bash (0.41s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-526000 docker-env) && out/minikube-darwin-arm64 status -p functional-526000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-526000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.41s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-526000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-526000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-vgzjl" [5a08dcb5-e101-4f1c-aee7-7a332e7b8f34] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-vgzjl" [5a08dcb5-e101-4f1c-aee7-7a332e7b8f34] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.017122208s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.11s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 image load --daemon gcr.io/google-containers/addon-resizer:functional-526000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-526000 image load --daemon gcr.io/google-containers/addon-resizer:functional-526000 --alsologtostderr: (2.239775416s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.32s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 image load --daemon gcr.io/google-containers/addon-resizer:functional-526000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-526000 image load --daemon gcr.io/google-containers/addon-resizer:functional-526000 --alsologtostderr: (1.541427042s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.62s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.444426s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-526000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 image load --daemon gcr.io/google-containers/addon-resizer:functional-526000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-526000 image load --daemon gcr.io/google-containers/addon-resizer:functional-526000 --alsologtostderr: (1.9710045s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.53s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 image save gcr.io/google-containers/addon-resizer:functional-526000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 image rm gcr.io/google-containers/addon-resizer:functional-526000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.17s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.62s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-526000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 image save --daemon gcr.io/google-containers/addon-resizer:functional-526000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-526000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.69s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-526000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-526000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [3a76fd11-a87c-4ab1-ba4a-5af93cd15cd3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [3a76fd11-a87c-4ab1-ba4a-5af93cd15cd3] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.007591375s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.12s)

TestFunctional/parallel/ServiceCmd/List (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.11s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 service list -o json
functional_test.go:1493: Took "96.069417ms" to run "out/minikube-darwin-arm64 -p functional-526000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.10s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.105.4:32366
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

TestFunctional/parallel/ServiceCmd/Format (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.11s)

TestFunctional/parallel/ServiceCmd/URL (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.105.4:32366
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.11s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-526000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.170.9 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-526000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.19s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.19s)

TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1314: Took "119.517709ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1328: Took "33.603375ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.16s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1365: Took "122.606167ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1378: Took "34.433916ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.16s)

TestFunctional/parallel/MountCmd/any-port (5.34s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-526000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2172964743/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1694043693363508000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2172964743/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1694043693363508000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2172964743/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1694043693363508000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2172964743/001/test-1694043693363508000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-526000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (67.110333ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  6 23:41 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  6 23:41 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  6 23:41 test-1694043693363508000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 ssh cat /mount-9p/test-1694043693363508000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-526000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [47890341-b8e1-40fd-8ee3-75d463d96181] Pending
helpers_test.go:344: "busybox-mount" [47890341-b8e1-40fd-8ee3-75d463d96181] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [47890341-b8e1-40fd-8ee3-75d463d96181] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [47890341-b8e1-40fd-8ee3-75d463d96181] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.007391167s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-526000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-526000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2172964743/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.34s)

TestFunctional/parallel/MountCmd/specific-port (0.78s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-526000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port606528201/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-526000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (68.69675ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-526000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port606528201/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-526000 ssh "sudo umount -f /mount-9p": exit status 1 (69.908709ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-526000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-526000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port606528201/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.78s)

TestFunctional/parallel/MountCmd/VerifyCleanup (0.75s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-526000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2147968616/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-526000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2147968616/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-526000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2147968616/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-526000 ssh "findmnt -T" /mount1: exit status 80 (72.342542ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17174-979/.minikube/machines/functional-526000/monitor: connect: connection refused
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_mount_c093f2922e4f8b08bdb15bba42b47baa34c0c215_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-526000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-526000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-526000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2147968616/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-526000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2147968616/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-526000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2147968616/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.75s)

TestFunctional/delete_addon-resizer_images (0.12s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-526000
--- PASS: TestFunctional/delete_addon-resizer_images (0.12s)

TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-526000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-526000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestImageBuild/serial/Setup (28.36s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-320000 --driver=qemu2 
image_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -p image-320000 --driver=qemu2 : (28.356019917s)
--- PASS: TestImageBuild/serial/Setup (28.36s)

TestImageBuild/serial/NormalBuild (1.05s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-320000
image_test.go:78: (dbg) Done: out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-320000: (1.054795667s)
--- PASS: TestImageBuild/serial/NormalBuild (1.05s)

TestImageBuild/serial/BuildWithDockerIgnore (0.12s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-320000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.12s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.1s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-320000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.10s)

TestIngressAddonLegacy/StartLegacyK8sCluster (68.18s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-arm64 start -p ingress-addon-legacy-208000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-arm64 start -p ingress-addon-legacy-208000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 : (1m8.181932084s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (68.18s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.86s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-208000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-208000 addons enable ingress --alsologtostderr -v=5: (14.856113083s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.86s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.26s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-208000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.26s)

TestJSONOutput/start/Command (45.38s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-501000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 start -p json-output-501000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : (45.378857709s)
--- PASS: TestJSONOutput/start/Command (45.38s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.29s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-501000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.29s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.25s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-501000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.25s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (12.07s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-501000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-501000 --output=json --user=testUser: (12.073986542s)
--- PASS: TestJSONOutput/stop/Command (12.07s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.33s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-098000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-098000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (94.895791ms)

-- stdout --
	{"specversion":"1.0","id":"23ced146-a29c-4577-af9c-8157c40336f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-098000] minikube v1.31.2 on Darwin 13.5.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c219e784-c573-404e-96b1-8383eff41500","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17174"}}
	{"specversion":"1.0","id":"08089c4f-79e5-44c2-ad84-45c4d0ebf9b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig"}}
	{"specversion":"1.0","id":"7f7e5dfc-e12f-4c9e-beb3-8c57b7439dcd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"ecb0aa4e-1bb1-4f65-b804-2689ca5d84dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a7749318-4bd4-4efb-a6ea-846d3c1a0c45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube"}}
	{"specversion":"1.0","id":"4957368b-1ab4-4a67-9960-062f52c6c8cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0e788113-083c-4760-b855-f19bd94fa83a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-098000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-098000
--- PASS: TestErrorJSONOutput (0.33s)

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestMinikubeProfile (69.13s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-302000 --driver=qemu2 
E0906 16:45:54.691172    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/functional-526000/client.crt: no such file or directory
E0906 16:45:54.697878    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/functional-526000/client.crt: no such file or directory
E0906 16:45:54.709944    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/functional-526000/client.crt: no such file or directory
E0906 16:45:54.732022    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/functional-526000/client.crt: no such file or directory
E0906 16:45:54.774091    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/functional-526000/client.crt: no such file or directory
E0906 16:45:54.856177    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/functional-526000/client.crt: no such file or directory
E0906 16:45:55.018366    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/functional-526000/client.crt: no such file or directory
E0906 16:45:55.339977    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/functional-526000/client.crt: no such file or directory
E0906 16:45:55.982169    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/functional-526000/client.crt: no such file or directory
E0906 16:45:57.264584    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/functional-526000/client.crt: no such file or directory
E0906 16:45:59.824804    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/functional-526000/client.crt: no such file or directory
E0906 16:46:04.946866    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/functional-526000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p first-302000 --driver=qemu2 : (30.014853583s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p second-303000 --driver=qemu2 
E0906 16:46:15.188887    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/functional-526000/client.crt: no such file or directory
E0906 16:46:35.670612    1397 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17174-979/.minikube/profiles/functional-526000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p second-303000 --driver=qemu2 : (38.307891083s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile first-302000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile second-303000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-303000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-303000
helpers_test.go:175: Cleaning up "first-302000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-302000
--- PASS: TestMinikubeProfile (69.13s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-447000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-447000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (97.758334ms)

-- stdout --
	* [NoKubernetes-447000] minikube v1.31.2 on Darwin 13.5.1 (arm64)
	  - MINIKUBE_LOCATION=17174
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17174-979/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17174-979/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-447000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-447000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (41.770458ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-447000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (0.15s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.15s)

TestNoKubernetes/serial/Stop (0.06s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-447000
--- PASS: TestNoKubernetes/serial/Stop (0.06s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-447000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-447000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (41.27675ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-447000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-782000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000: exit status 7 (28.628125ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-782000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.10s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-147000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/no-preload/serial/Stop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-147000 -n no-preload-147000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-147000 -n no-preload-147000: exit status 7 (28.448416ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-147000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-571000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/embed-certs/serial/Stop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-571000 -n embed-certs-571000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-571000 -n embed-certs-571000: exit status 7 (28.253167ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-571000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-236000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-236000 -n default-k8s-diff-port-236000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-236000 -n default-k8s-diff-port-236000: exit status 7 (30.342292ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-236000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-401000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-401000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/newest-cni/serial/Stop (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-401000 -n newest-cni-401000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-401000 -n newest-cni-401000: exit status 7 (29.559459ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-401000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)


Test skip (20/244)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

=== RUN   TestDownloadOnly/v1.28.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.1/cached-images (0.00s)

=== RUN   TestDownloadOnly/v1.28.1/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.1/binaries (0.00s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:296: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-967000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-967000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-967000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-967000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-967000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-967000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-967000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-967000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-967000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-967000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-967000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: /etc/hosts:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: /etc/resolv.conf:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-967000

>>> host: crictl pods:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: crictl containers:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> k8s: describe netcat deployment:
error: context "cilium-967000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-967000" does not exist

>>> k8s: netcat logs:
error: context "cilium-967000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-967000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-967000" does not exist

>>> k8s: coredns logs:
error: context "cilium-967000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-967000" does not exist

>>> k8s: api server logs:
error: context "cilium-967000" does not exist

>>> host: /etc/cni:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: ip a s:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: ip r s:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: iptables-save:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: iptables table nat:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-967000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-967000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-967000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-967000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-967000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-967000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-967000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-967000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-967000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-967000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-967000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: kubelet daemon config:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> k8s: kubelet logs:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-967000

>>> host: docker daemon status:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: docker daemon config:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: docker system info:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: cri-docker daemon status:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: cri-docker daemon config:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: cri-dockerd version:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: containerd daemon status:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: containerd daemon config:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: containerd config dump:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: crio daemon status:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: crio daemon config:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: /etc/crio:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

>>> host: crio config:
* Profile "cilium-967000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-967000"

----------------------- debugLogs end: cilium-967000 [took: 2.134901042s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-967000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-967000
--- SKIP: TestNetworkPlugins/group/cilium (2.37s)

TestStartStop/group/disable-driver-mounts (0.24s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-686000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-686000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.24s)
