Test Report: QEMU_macOS 17207

16e5f2f2154e76f2b54b730ee30f906631f73fdb : 2023-09-10 : 30951

Tests failed (87/244)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 18.47
7 TestDownloadOnly/v1.16.0/kubectl 0
20 TestOffline 9.93
22 TestAddons/Setup 44.13
23 TestCertOptions 10.12
24 TestCertExpiration 195.15
25 TestDockerFlags 10.16
26 TestForceSystemdFlag 12.08
27 TestForceSystemdEnv 9.97
72 TestFunctional/parallel/ServiceCmdConnect 28.26
110 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.17
139 TestImageBuild/serial/BuildWithBuildArg 1.06
148 TestIngressAddonLegacy/serial/ValidateIngressAddons 57.77
183 TestMountStart/serial/StartWithMountFirst 10.39
186 TestMultiNode/serial/FreshStart2Nodes 9.9
187 TestMultiNode/serial/DeployApp2Nodes 83.68
188 TestMultiNode/serial/PingHostFrom2Pods 0.08
189 TestMultiNode/serial/AddNode 0.07
190 TestMultiNode/serial/ProfileList 0.1
191 TestMultiNode/serial/CopyFile 0.06
192 TestMultiNode/serial/StopNode 0.14
193 TestMultiNode/serial/StartAfterStop 0.1
194 TestMultiNode/serial/RestartKeepsNodes 5.37
195 TestMultiNode/serial/DeleteNode 0.1
196 TestMultiNode/serial/StopMultiNode 0.15
197 TestMultiNode/serial/RestartMultiNode 5.25
198 TestMultiNode/serial/ValidateNameConflict 19.92
202 TestPreload 9.92
204 TestScheduledStopUnix 9.88
205 TestSkaffold 11.85
208 TestRunningBinaryUpgrade 161.18
210 TestKubernetesUpgrade 15.26
223 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.42
224 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.03
225 TestStoppedBinaryUpgrade/Setup 145.49
227 TestPause/serial/Start 9.95
237 TestNoKubernetes/serial/StartWithK8s 9.77
238 TestNoKubernetes/serial/StartWithStopK8s 5.32
239 TestNoKubernetes/serial/Start 5.31
243 TestNoKubernetes/serial/StartNoArgs 5.31
245 TestNetworkPlugins/group/auto/Start 9.8
246 TestNetworkPlugins/group/kindnet/Start 9.77
247 TestNetworkPlugins/group/calico/Start 9.7
248 TestNetworkPlugins/group/custom-flannel/Start 9.86
249 TestNetworkPlugins/group/false/Start 9.75
250 TestNetworkPlugins/group/enable-default-cni/Start 9.7
251 TestNetworkPlugins/group/flannel/Start 9.83
252 TestNetworkPlugins/group/bridge/Start 9.66
253 TestNetworkPlugins/group/kubenet/Start 9.75
254 TestStoppedBinaryUpgrade/Upgrade 2.25
256 TestStartStop/group/old-k8s-version/serial/FirstStart 9.98
257 TestStoppedBinaryUpgrade/MinikubeLogs 0.08
259 TestStartStop/group/no-preload/serial/FirstStart 11.64
260 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
261 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
264 TestStartStop/group/old-k8s-version/serial/SecondStart 6.97
265 TestStartStop/group/no-preload/serial/DeployApp 0.09
266 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
269 TestStartStop/group/no-preload/serial/SecondStart 5.2
270 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
271 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
272 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
273 TestStartStop/group/old-k8s-version/serial/Pause 0.1
274 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
275 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
276 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
278 TestStartStop/group/embed-certs/serial/FirstStart 9.97
279 TestStartStop/group/no-preload/serial/Pause 0.12
281 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 11.48
282 TestStartStop/group/embed-certs/serial/DeployApp 0.1
283 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
286 TestStartStop/group/embed-certs/serial/SecondStart 6.95
287 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
288 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
291 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.21
292 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
293 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.05
294 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
295 TestStartStop/group/embed-certs/serial/Pause 0.1
296 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
297 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
298 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.08
300 TestStartStop/group/newest-cni/serial/FirstStart 9.79
301 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.14
306 TestStartStop/group/newest-cni/serial/SecondStart 5.25
309 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
310 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.16.0/json-events (18.47s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-556000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-556000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 : exit status 40 (18.472060791s)

-- stdout --
	{"specversion":"1.0","id":"a27add3c-2a4c-49c5-802c-83e23f88acdc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-556000] minikube v1.31.2 on Darwin 13.5.2 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4565483c-adf9-4427-9082-3f4ed8b89646","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17207"}}
	{"specversion":"1.0","id":"0064c9b6-4952-4249-85af-72d54942ef31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig"}}
	{"specversion":"1.0","id":"f2618511-95c3-4a74-8cda-27f8e4c208df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"03273f79-6a26-4dc1-ad9e-17a4a8738149","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5c93f175-ba97-4d2a-9169-06b96c725592","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube"}}
	{"specversion":"1.0","id":"76409075-9d84-408b-bb0e-6b09e599b74b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"89848bf7-78a2-4184-b750-1934ef291306","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"88985be2-7fd1-4f1b-a603-c1c71697fdfb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"65f7f228-67cb-4043-af78-f7262416485d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"3df29953-c71f-4ab5-a890-5d0c8ec41a31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node download-only-556000 in cluster download-only-556000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"58467747-4025-4577-a600-cf1aa1d0c9e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.16.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d3f8fe1d-9e42-4b7b-a672-07defa4866b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17207-1093/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106165f68 0x106165f68 0x106165f68 0x106165f68 0x106165f68 0x106165f68 0x106165f68] Decompressors:map[bz2:0x140004c1a78 gz:0x140004c1ad0 tar:0x140004c1a80 tar.bz2:0x140004c1a90 tar.gz:0x140004c1aa0 tar.xz:0x140004c1ab0 tar.zst:0x140004c1ac0 tbz2:0x140004c1a90 tgz:0x140004
c1aa0 txz:0x140004c1ab0 tzst:0x140004c1ac0 xz:0x140004c1ad8 zip:0x140004c1ae0 zst:0x140004c1af0] Getters:map[file:0x14000be6ff0 http:0x14000778140 https:0x14000778190] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"0ea0a846-eadc-4f3b-bb59-369ee7e1dc8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0910 13:52:07.346540    2202 out.go:296] Setting OutFile to fd 1 ...
	I0910 13:52:07.346669    2202 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 13:52:07.346672    2202 out.go:309] Setting ErrFile to fd 2...
	I0910 13:52:07.346675    2202 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 13:52:07.346787    2202 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	W0910 13:52:07.346856    2202 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17207-1093/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17207-1093/.minikube/config/config.json: no such file or directory
	I0910 13:52:07.347963    2202 out.go:303] Setting JSON to true
	I0910 13:52:07.364181    2202 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1302,"bootTime":1694377825,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0910 13:52:07.364247    2202 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0910 13:52:07.370979    2202 out.go:97] [download-only-556000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0910 13:52:07.373814    2202 out.go:169] MINIKUBE_LOCATION=17207
	W0910 13:52:07.371131    2202 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball: no such file or directory
	I0910 13:52:07.371151    2202 notify.go:220] Checking for updates...
	I0910 13:52:07.380713    2202 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 13:52:07.383862    2202 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 13:52:07.386917    2202 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 13:52:07.389906    2202 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	W0910 13:52:07.395844    2202 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0910 13:52:07.396017    2202 driver.go:373] Setting default libvirt URI to qemu:///system
	I0910 13:52:07.401946    2202 out.go:97] Using the qemu2 driver based on user configuration
	I0910 13:52:07.401952    2202 start.go:298] selected driver: qemu2
	I0910 13:52:07.401954    2202 start.go:902] validating driver "qemu2" against <nil>
	I0910 13:52:07.402006    2202 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0910 13:52:07.405829    2202 out.go:169] Automatically selected the socket_vmnet network
	I0910 13:52:07.412223    2202 start_flags.go:384] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0910 13:52:07.412303    2202 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0910 13:52:07.412368    2202 cni.go:84] Creating CNI manager for ""
	I0910 13:52:07.412384    2202 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0910 13:52:07.412392    2202 start_flags.go:321] config:
	{Name:download-only-556000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-556000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRun
time:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 13:52:07.417886    2202 iso.go:125] acquiring lock: {Name:mk7b64d48bce9081530bc49bdfed6972c82152dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 13:52:07.420959    2202 out.go:97] Downloading VM boot image ...
	I0910 13:52:07.420976    2202 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso
	I0910 13:52:16.283106    2202 out.go:97] Starting control plane node download-only-556000 in cluster download-only-556000
	I0910 13:52:16.283131    2202 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0910 13:52:16.346946    2202 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0910 13:52:16.347033    2202 cache.go:57] Caching tarball of preloaded images
	I0910 13:52:16.347188    2202 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0910 13:52:16.352220    2202 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0910 13:52:16.352227    2202 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0910 13:52:16.431413    2202 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0910 13:52:24.779793    2202 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0910 13:52:24.779912    2202 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0910 13:52:25.419300    2202 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0910 13:52:25.419487    2202 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/download-only-556000/config.json ...
	I0910 13:52:25.419505    2202 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/download-only-556000/config.json: {Name:mk4987b801c215dfc32a717379894ca3551848b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 13:52:25.419724    2202 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0910 13:52:25.419891    2202 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0910 13:52:25.752870    2202 out.go:169] 
	W0910 13:52:25.756815    2202 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17207-1093/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106165f68 0x106165f68 0x106165f68 0x106165f68 0x106165f68 0x106165f68 0x106165f68] Decompressors:map[bz2:0x140004c1a78 gz:0x140004c1ad0 tar:0x140004c1a80 tar.bz2:0x140004c1a90 tar.gz:0x140004c1aa0 tar.xz:0x140004c1ab0 tar.zst:0x140004c1ac0 tbz2:0x140004c1a90 tgz:0x140004c1aa0 txz:0x140004c1ab0 tzst:0x140004c1ac0 xz:0x140004c1ad8 zip:0x140004c1ae0 zst:0x140004c1af0] Getters:map[file:0x14000be6ff0 http:0x14000778140 https:0x14000778190] Dir:false ProgressListener
:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0910 13:52:25.756843    2202 out_reason.go:110] 
	W0910 13:52:25.763891    2202 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 13:52:25.767786    2202 out.go:169] 

** /stderr **
aaa_download_only_test.go:71: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-556000" "--force" "--alsologtostderr" "--kubernetes-version=v1.16.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.16.0/json-events (18.47s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:160: expected the file for binary exist at "/Users/jenkins/minikube-integration/17207-1093/.minikube/cache/darwin/arm64/v1.16.0/kubectl" but got error stat /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/darwin/arm64/v1.16.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestOffline (9.93s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-638000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-638000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.795179916s)

-- stdout --
	* [offline-docker-638000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node offline-docker-638000 in cluster offline-docker-638000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-638000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 14:05:00.748989    3761 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:05:00.749114    3761 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:05:00.749117    3761 out.go:309] Setting ErrFile to fd 2...
	I0910 14:05:00.749119    3761 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:05:00.749233    3761 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:05:00.750236    3761 out.go:303] Setting JSON to false
	I0910 14:05:00.766602    3761 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2075,"bootTime":1694377825,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0910 14:05:00.766678    3761 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0910 14:05:00.771770    3761 out.go:177] * [offline-docker-638000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0910 14:05:00.779788    3761 out.go:177]   - MINIKUBE_LOCATION=17207
	I0910 14:05:00.779818    3761 notify.go:220] Checking for updates...
	I0910 14:05:00.786685    3761 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 14:05:00.789668    3761 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 14:05:00.792717    3761 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 14:05:00.801686    3761 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	I0910 14:05:00.804687    3761 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 14:05:00.807963    3761 config.go:182] Loaded profile config "multinode-362000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:05:00.808009    3761 driver.go:373] Setting default libvirt URI to qemu:///system
	I0910 14:05:00.811629    3761 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 14:05:00.818667    3761 start.go:298] selected driver: qemu2
	I0910 14:05:00.818680    3761 start.go:902] validating driver "qemu2" against <nil>
	I0910 14:05:00.818688    3761 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 14:05:00.821705    3761 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0910 14:05:00.824670    3761 out.go:177] * Automatically selected the socket_vmnet network
	I0910 14:05:00.827680    3761 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 14:05:00.827707    3761 cni.go:84] Creating CNI manager for ""
	I0910 14:05:00.827714    3761 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 14:05:00.827718    3761 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 14:05:00.827724    3761 start_flags.go:321] config:
	{Name:offline-docker-638000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:offline-docker-638000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthS
ock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 14:05:00.831823    3761 iso.go:125] acquiring lock: {Name:mk7b64d48bce9081530bc49bdfed6972c82152dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:05:00.839707    3761 out.go:177] * Starting control plane node offline-docker-638000 in cluster offline-docker-638000
	I0910 14:05:00.843638    3761 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0910 14:05:00.843670    3761 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0910 14:05:00.843685    3761 cache.go:57] Caching tarball of preloaded images
	I0910 14:05:00.843761    3761 preload.go:174] Found /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 14:05:00.843765    3761 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0910 14:05:00.843837    3761 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/offline-docker-638000/config.json ...
	I0910 14:05:00.843849    3761 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/offline-docker-638000/config.json: {Name:mk44990b5eb612ba5f69ba6cc50264241a5c1d06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 14:05:00.847968    3761 start.go:365] acquiring machines lock for offline-docker-638000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:05:00.848018    3761 start.go:369] acquired machines lock for "offline-docker-638000" in 36.75µs
	I0910 14:05:00.848033    3761 start.go:93] Provisioning new machine with config: &{Name:offline-docker-638000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:offline-docker-638000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:05:00.848071    3761 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:05:00.855690    3761 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0910 14:05:00.869924    3761 start.go:159] libmachine.API.Create for "offline-docker-638000" (driver="qemu2")
	I0910 14:05:00.869943    3761 client.go:168] LocalClient.Create starting
	I0910 14:05:00.870014    3761 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:05:00.870041    3761 main.go:141] libmachine: Decoding PEM data...
	I0910 14:05:00.870054    3761 main.go:141] libmachine: Parsing certificate...
	I0910 14:05:00.870108    3761 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:05:00.870127    3761 main.go:141] libmachine: Decoding PEM data...
	I0910 14:05:00.870137    3761 main.go:141] libmachine: Parsing certificate...
	I0910 14:05:00.870486    3761 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:05:01.003240    3761 main.go:141] libmachine: Creating SSH key...
	I0910 14:05:01.131099    3761 main.go:141] libmachine: Creating Disk image...
	I0910 14:05:01.131109    3761 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:05:01.131328    3761 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/offline-docker-638000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/offline-docker-638000/disk.qcow2
	I0910 14:05:01.153153    3761 main.go:141] libmachine: STDOUT: 
	I0910 14:05:01.153175    3761 main.go:141] libmachine: STDERR: 
	I0910 14:05:01.153250    3761 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/offline-docker-638000/disk.qcow2 +20000M
	I0910 14:05:01.161179    3761 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:05:01.161197    3761 main.go:141] libmachine: STDERR: 
	I0910 14:05:01.161232    3761 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/offline-docker-638000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/offline-docker-638000/disk.qcow2
	I0910 14:05:01.161239    3761 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:05:01.161273    3761 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/offline-docker-638000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/offline-docker-638000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/offline-docker-638000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:b2:29:c0:42:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/offline-docker-638000/disk.qcow2
	I0910 14:05:01.163008    3761 main.go:141] libmachine: STDOUT: 
	I0910 14:05:01.163022    3761 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:05:01.163040    3761 client.go:171] LocalClient.Create took 293.091458ms
	I0910 14:05:03.163814    3761 start.go:128] duration metric: createHost completed in 2.315735459s
	I0910 14:05:03.163844    3761 start.go:83] releasing machines lock for "offline-docker-638000", held for 2.315825417s
	W0910 14:05:03.163865    3761 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:05:03.172862    3761 out.go:177] * Deleting "offline-docker-638000" in qemu2 ...
	W0910 14:05:03.184907    3761 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:05:03.184918    3761 start.go:687] Will try again in 5 seconds ...
	I0910 14:05:08.187159    3761 start.go:365] acquiring machines lock for offline-docker-638000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:05:08.187548    3761 start.go:369] acquired machines lock for "offline-docker-638000" in 299.5µs
	I0910 14:05:08.187665    3761 start.go:93] Provisioning new machine with config: &{Name:offline-docker-638000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:offline-docker-638000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:05:08.188060    3761 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:05:08.197702    3761 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0910 14:05:08.243209    3761 start.go:159] libmachine.API.Create for "offline-docker-638000" (driver="qemu2")
	I0910 14:05:08.243251    3761 client.go:168] LocalClient.Create starting
	I0910 14:05:08.243413    3761 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:05:08.243481    3761 main.go:141] libmachine: Decoding PEM data...
	I0910 14:05:08.243500    3761 main.go:141] libmachine: Parsing certificate...
	I0910 14:05:08.243564    3761 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:05:08.243601    3761 main.go:141] libmachine: Decoding PEM data...
	I0910 14:05:08.243616    3761 main.go:141] libmachine: Parsing certificate...
	I0910 14:05:08.244046    3761 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:05:08.370930    3761 main.go:141] libmachine: Creating SSH key...
	I0910 14:05:08.464724    3761 main.go:141] libmachine: Creating Disk image...
	I0910 14:05:08.464731    3761 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:05:08.464873    3761 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/offline-docker-638000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/offline-docker-638000/disk.qcow2
	I0910 14:05:08.473612    3761 main.go:141] libmachine: STDOUT: 
	I0910 14:05:08.473624    3761 main.go:141] libmachine: STDERR: 
	I0910 14:05:08.473688    3761 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/offline-docker-638000/disk.qcow2 +20000M
	I0910 14:05:08.480819    3761 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:05:08.480831    3761 main.go:141] libmachine: STDERR: 
	I0910 14:05:08.480847    3761 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/offline-docker-638000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/offline-docker-638000/disk.qcow2
	I0910 14:05:08.480853    3761 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:05:08.480895    3761 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/offline-docker-638000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/offline-docker-638000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/offline-docker-638000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:87:c7:43:ed:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/offline-docker-638000/disk.qcow2
	I0910 14:05:08.482375    3761 main.go:141] libmachine: STDOUT: 
	I0910 14:05:08.482388    3761 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:05:08.482409    3761 client.go:171] LocalClient.Create took 239.153ms
	I0910 14:05:10.484472    3761 start.go:128] duration metric: createHost completed in 2.296402916s
	I0910 14:05:10.484511    3761 start.go:83] releasing machines lock for "offline-docker-638000", held for 2.296950458s
	W0910 14:05:10.484617    3761 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-638000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-638000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:05:10.492933    3761 out.go:177] 
	W0910 14:05:10.496901    3761 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 14:05:10.496906    3761 out.go:239] * 
	* 
	W0910 14:05:10.497383    3761 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 14:05:10.507874    3761 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-638000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:522: *** TestOffline FAILED at 2023-09-10 14:05:10.516426 -0700 PDT m=+783.200543835
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-638000 -n offline-docker-638000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-638000 -n offline-docker-638000: exit status 7 (31.846917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-638000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-638000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-638000
--- FAIL: TestOffline (9.93s)
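Nearly every failure in this run reduces to the same root cause visible in the TestOffline log above: the qemu2 driver cannot reach the socket_vmnet unix socket (`Failed to connect to "/var/run/socket_vmnet": Connection refused`), so VM creation aborts before the guest ever boots. A minimal diagnostic sketch for the CI host follows; the socket and client paths are taken from the log, but the restart command is an assumption (a typical Homebrew setup), not something the log confirms:

```shell
# Check whether the socket_vmnet daemon's unix socket exists on the host.
# Paths are copied from the log above; adjust for your install.
if [ -S /var/run/socket_vmnet ]; then
    echo "socket exists"
else
    echo "socket missing: start socket_vmnet first (e.g. 'sudo brew services start socket_vmnet')"
fi

# Probe the socket the way minikube's driver does: invoke the client.
# If the daemon is down this reproduces the "Connection refused" in the log.
/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true \
    || echo "connection refused: socket_vmnet is not listening"
```

If the socket is missing or refusing connections, restarting the socket_vmnet service on the Jenkins agent would likely clear the bulk of these 87 failures, since the per-test logs show the identical error for each qemu2 VM creation.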

TestAddons/Setup (44.13s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-899000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-899000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (44.125840125s)

-- stdout --
	* [addons-899000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node addons-899000 in cluster addons-899000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.28.1 on Docker 24.0.6 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying csi-hostpath-driver addon...
	  - Using image registry.k8s.io/ingress-nginx/controller:v1.8.1
	
	  - Using image docker.io/registry:2.8.1
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	

-- /stdout --
** stderr ** 
	I0910 13:52:43.430264    2268 out.go:296] Setting OutFile to fd 1 ...
	I0910 13:52:43.430384    2268 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 13:52:43.430387    2268 out.go:309] Setting ErrFile to fd 2...
	I0910 13:52:43.430389    2268 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 13:52:43.430507    2268 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 13:52:43.431483    2268 out.go:303] Setting JSON to false
	I0910 13:52:43.446584    2268 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1338,"bootTime":1694377825,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0910 13:52:43.446661    2268 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0910 13:52:43.452135    2268 out.go:177] * [addons-899000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0910 13:52:43.459148    2268 out.go:177]   - MINIKUBE_LOCATION=17207
	I0910 13:52:43.459214    2268 notify.go:220] Checking for updates...
	I0910 13:52:43.465080    2268 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 13:52:43.468152    2268 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 13:52:43.469641    2268 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 13:52:43.472178    2268 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	I0910 13:52:43.475120    2268 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 13:52:43.478328    2268 driver.go:373] Setting default libvirt URI to qemu:///system
	I0910 13:52:43.482071    2268 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 13:52:43.489104    2268 start.go:298] selected driver: qemu2
	I0910 13:52:43.489110    2268 start.go:902] validating driver "qemu2" against <nil>
	I0910 13:52:43.489116    2268 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 13:52:43.490997    2268 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0910 13:52:43.499021    2268 out.go:177] * Automatically selected the socket_vmnet network
	I0910 13:52:43.502211    2268 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 13:52:43.502245    2268 cni.go:84] Creating CNI manager for ""
	I0910 13:52:43.502252    2268 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 13:52:43.502256    2268 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 13:52:43.502261    2268 start_flags.go:321] config:
	{Name:addons-899000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-899000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 13:52:43.506265    2268 iso.go:125] acquiring lock: {Name:mk7b64d48bce9081530bc49bdfed6972c82152dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 13:52:43.514080    2268 out.go:177] * Starting control plane node addons-899000 in cluster addons-899000
	I0910 13:52:43.518107    2268 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0910 13:52:43.518125    2268 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0910 13:52:43.518141    2268 cache.go:57] Caching tarball of preloaded images
	I0910 13:52:43.518217    2268 preload.go:174] Found /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 13:52:43.518223    2268 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0910 13:52:43.518836    2268 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/addons-899000/config.json ...
	I0910 13:52:43.518854    2268 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/addons-899000/config.json: {Name:mk6e5986a6ac8444bdf3b0b1f4e86f664918a857 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 13:52:43.519076    2268 start.go:365] acquiring machines lock for addons-899000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 13:52:43.519163    2268 start.go:369] acquired machines lock for "addons-899000" in 72.917µs
	I0910 13:52:43.519184    2268 start.go:93] Provisioning new machine with config: &{Name:addons-899000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-899000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 13:52:43.519359    2268 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 13:52:43.526172    2268 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0910 13:52:43.877470    2268 start.go:159] libmachine.API.Create for "addons-899000" (driver="qemu2")
	I0910 13:52:43.877495    2268 client.go:168] LocalClient.Create starting
	I0910 13:52:43.877673    2268 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 13:52:43.983297    2268 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 13:52:44.078641    2268 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 13:52:44.497079    2268 main.go:141] libmachine: Creating SSH key...
	I0910 13:52:44.682051    2268 main.go:141] libmachine: Creating Disk image...
	I0910 13:52:44.682063    2268 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 13:52:44.682261    2268 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/addons-899000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/addons-899000/disk.qcow2
	I0910 13:52:44.716210    2268 main.go:141] libmachine: STDOUT: 
	I0910 13:52:44.716239    2268 main.go:141] libmachine: STDERR: 
	I0910 13:52:44.716315    2268 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/addons-899000/disk.qcow2 +20000M
	I0910 13:52:44.723570    2268 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 13:52:44.723582    2268 main.go:141] libmachine: STDERR: 
	I0910 13:52:44.723596    2268 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/addons-899000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/addons-899000/disk.qcow2
	I0910 13:52:44.723603    2268 main.go:141] libmachine: Starting QEMU VM...
	I0910 13:52:44.723645    2268 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/addons-899000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/addons-899000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/addons-899000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:00:11:b3:e0:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/addons-899000/disk.qcow2
	I0910 13:52:44.792218    2268 main.go:141] libmachine: STDOUT: 
	I0910 13:52:44.792295    2268 main.go:141] libmachine: STDERR: 
	I0910 13:52:44.792303    2268 main.go:141] libmachine: Attempt 0
	I0910 13:52:44.792321    2268 main.go:141] libmachine: Searching for 46:0:11:b3:e0:52 in /var/db/dhcpd_leases ...
	I0910 13:52:46.794496    2268 main.go:141] libmachine: Attempt 1
	I0910 13:52:46.794583    2268 main.go:141] libmachine: Searching for 46:0:11:b3:e0:52 in /var/db/dhcpd_leases ...
	I0910 13:52:48.796835    2268 main.go:141] libmachine: Attempt 2
	I0910 13:52:48.796866    2268 main.go:141] libmachine: Searching for 46:0:11:b3:e0:52 in /var/db/dhcpd_leases ...
	I0910 13:52:50.798948    2268 main.go:141] libmachine: Attempt 3
	I0910 13:52:50.798970    2268 main.go:141] libmachine: Searching for 46:0:11:b3:e0:52 in /var/db/dhcpd_leases ...
	I0910 13:52:52.800997    2268 main.go:141] libmachine: Attempt 4
	I0910 13:52:52.801009    2268 main.go:141] libmachine: Searching for 46:0:11:b3:e0:52 in /var/db/dhcpd_leases ...
	I0910 13:52:54.803137    2268 main.go:141] libmachine: Attempt 5
	I0910 13:52:54.803185    2268 main.go:141] libmachine: Searching for 46:0:11:b3:e0:52 in /var/db/dhcpd_leases ...
	I0910 13:52:56.805254    2268 main.go:141] libmachine: Attempt 6
	I0910 13:52:56.805298    2268 main.go:141] libmachine: Searching for 46:0:11:b3:e0:52 in /var/db/dhcpd_leases ...
	I0910 13:52:56.805468    2268 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0910 13:52:56.805501    2268 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:46:0:11:b3:e0:52 ID:1,46:0:11:b3:e0:52 Lease:0x64ff7e27}
	I0910 13:52:56.805517    2268 main.go:141] libmachine: Found match: 46:0:11:b3:e0:52
	I0910 13:52:56.805525    2268 main.go:141] libmachine: IP: 192.168.105.2
	I0910 13:52:56.805531    2268 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
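	Note the MAC mismatch above: QEMU was started with `mac=46:00:11:b3:e0:52`, but the lease search and the match use `46:0:11:b3:e0:52`, because macOS's `/var/db/dhcpd_leases` records each octet without its leading zero. A minimal sketch of that normalization (the function name here is illustrative, not minikube's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// trimMacLeadingZeros rewrites a MAC address the way macOS's
// /var/db/dhcpd_leases records it: each octet drops its leading zero,
// so "46:00:11:b3:e0:52" becomes "46:0:11:b3:e0:52".
func trimMacLeadingZeros(mac string) string {
	parts := strings.Split(mac, ":")
	for i, p := range parts {
		trimmed := strings.TrimPrefix(p, "0")
		if trimmed == "" { // octet "00" collapses to "0"
			trimmed = "0"
		}
		parts[i] = trimmed
	}
	return strings.Join(parts, ":")
}

func main() {
	fmt.Println(trimMacLeadingZeros("46:00:11:b3:e0:52")) // 46:0:11:b3:e0:52
}
```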
	I0910 13:52:58.826012    2268 machine.go:88] provisioning docker machine ...
	I0910 13:52:58.826078    2268 buildroot.go:166] provisioning hostname "addons-899000"
	I0910 13:52:58.826650    2268 main.go:141] libmachine: Using SSH client type: native
	I0910 13:52:58.827400    2268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d763b0] 0x102d78e10 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0910 13:52:58.827418    2268 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-899000 && echo "addons-899000" | sudo tee /etc/hostname
	I0910 13:52:58.916205    2268 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-899000
	
	I0910 13:52:58.916328    2268 main.go:141] libmachine: Using SSH client type: native
	I0910 13:52:58.916844    2268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d763b0] 0x102d78e10 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0910 13:52:58.916860    2268 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-899000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-899000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-899000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 13:52:58.983174    2268 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 13:52:58.983192    2268 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17207-1093/.minikube CaCertPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17207-1093/.minikube}
	I0910 13:52:58.983204    2268 buildroot.go:174] setting up certificates
	I0910 13:52:58.983213    2268 provision.go:83] configureAuth start
	I0910 13:52:58.983218    2268 provision.go:138] copyHostCerts
	I0910 13:52:58.983381    2268 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17207-1093/.minikube/cert.pem (1123 bytes)
	I0910 13:52:58.983739    2268 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17207-1093/.minikube/key.pem (1675 bytes)
	I0910 13:52:58.983890    2268 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17207-1093/.minikube/ca.pem (1078 bytes)
	I0910 13:52:58.984012    2268 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca-key.pem org=jenkins.addons-899000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-899000]
	I0910 13:52:59.143854    2268 provision.go:172] copyRemoteCerts
	I0910 13:52:59.143914    2268 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 13:52:59.143923    2268 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/addons-899000/id_rsa Username:docker}
	I0910 13:52:59.174662    2268 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0910 13:52:59.181712    2268 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0910 13:52:59.188340    2268 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0910 13:52:59.195043    2268 provision.go:86] duration metric: configureAuth took 211.813833ms
	I0910 13:52:59.195055    2268 buildroot.go:189] setting minikube options for container-runtime
	I0910 13:52:59.195152    2268 config.go:182] Loaded profile config "addons-899000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 13:52:59.195192    2268 main.go:141] libmachine: Using SSH client type: native
	I0910 13:52:59.195412    2268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d763b0] 0x102d78e10 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0910 13:52:59.195416    2268 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0910 13:52:59.251043    2268 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0910 13:52:59.251068    2268 buildroot.go:70] root file system type: tmpfs
	I0910 13:52:59.251132    2268 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0910 13:52:59.251186    2268 main.go:141] libmachine: Using SSH client type: native
	I0910 13:52:59.251411    2268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d763b0] 0x102d78e10 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0910 13:52:59.251447    2268 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0910 13:52:59.309014    2268 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0910 13:52:59.309053    2268 main.go:141] libmachine: Using SSH client type: native
	I0910 13:52:59.309274    2268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d763b0] 0x102d78e10 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0910 13:52:59.309283    2268 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0910 13:52:59.668192    2268 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0910 13:52:59.668205    2268 machine.go:91] provisioned docker machine in 842.182167ms
	I0910 13:52:59.668211    2268 client.go:171] LocalClient.Create took 15.790880083s
	I0910 13:52:59.668228    2268 start.go:167] duration metric: libmachine.API.Create for "addons-899000" took 15.790930875s
	I0910 13:52:59.668234    2268 start.go:300] post-start starting for "addons-899000" (driver="qemu2")
	I0910 13:52:59.668239    2268 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 13:52:59.668308    2268 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 13:52:59.668319    2268 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/addons-899000/id_rsa Username:docker}
	I0910 13:52:59.695601    2268 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 13:52:59.696979    2268 info.go:137] Remote host: Buildroot 2021.02.12
	I0910 13:52:59.696988    2268 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17207-1093/.minikube/addons for local assets ...
	I0910 13:52:59.697063    2268 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17207-1093/.minikube/files for local assets ...
	I0910 13:52:59.697090    2268 start.go:303] post-start completed in 28.852959ms
	I0910 13:52:59.697464    2268 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/addons-899000/config.json ...
	I0910 13:52:59.697619    2268 start.go:128] duration metric: createHost completed in 16.178424708s
	I0910 13:52:59.697643    2268 main.go:141] libmachine: Using SSH client type: native
	I0910 13:52:59.697865    2268 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d763b0] 0x102d78e10 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0910 13:52:59.697870    2268 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 13:52:59.754118    2268 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694379179.565334169
	
	I0910 13:52:59.754125    2268 fix.go:206] guest clock: 1694379179.565334169
	I0910 13:52:59.754129    2268 fix.go:219] Guest: 2023-09-10 13:52:59.565334169 -0700 PDT Remote: 2023-09-10 13:52:59.697623 -0700 PDT m=+16.286043834 (delta=-132.288831ms)
	I0910 13:52:59.754145    2268 fix.go:190] guest clock delta is within tolerance: -132.288831ms
	I0910 13:52:59.754148    2268 start.go:83] releasing machines lock for "addons-899000", held for 16.235146791s
	I0910 13:52:59.754425    2268 ssh_runner.go:195] Run: cat /version.json
	I0910 13:52:59.754434    2268 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/addons-899000/id_rsa Username:docker}
	I0910 13:52:59.754425    2268 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 13:52:59.754510    2268 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/addons-899000/id_rsa Username:docker}
	I0910 13:52:59.784254    2268 ssh_runner.go:195] Run: systemctl --version
	I0910 13:52:59.869675    2268 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 13:52:59.871704    2268 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 13:52:59.871737    2268 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 13:52:59.876976    2268 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 13:52:59.876983    2268 start.go:466] detecting cgroup driver to use...
	I0910 13:52:59.877111    2268 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 13:52:59.882548    2268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0910 13:52:59.886222    2268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0910 13:52:59.889626    2268 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0910 13:52:59.889659    2268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0910 13:52:59.892884    2268 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 13:52:59.895956    2268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0910 13:52:59.898991    2268 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 13:52:59.902477    2268 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 13:52:59.906022    2268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0910 13:52:59.909453    2268 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 13:52:59.912194    2268 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 13:52:59.914977    2268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 13:52:59.998711    2268 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0910 13:53:00.005237    2268 start.go:466] detecting cgroup driver to use...
	I0910 13:53:00.005316    2268 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0910 13:53:00.013905    2268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 13:53:00.018963    2268 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 13:53:00.025127    2268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 13:53:00.029685    2268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 13:53:00.034270    2268 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0910 13:53:00.072781    2268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 13:53:00.077695    2268 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 13:53:00.082985    2268 ssh_runner.go:195] Run: which cri-dockerd
	I0910 13:53:00.084466    2268 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0910 13:53:00.087082    2268 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0910 13:53:00.092372    2268 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0910 13:53:00.164764    2268 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0910 13:53:00.241329    2268 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0910 13:53:00.241344    2268 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0910 13:53:00.246679    2268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 13:53:00.332572    2268 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 13:53:01.489864    2268 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.157289708s)
	I0910 13:53:01.489924    2268 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0910 13:53:01.566943    2268 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0910 13:53:01.652215    2268 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0910 13:53:01.734355    2268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 13:53:01.818365    2268 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0910 13:53:01.826437    2268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 13:53:01.914133    2268 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0910 13:53:01.937660    2268 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0910 13:53:01.937753    2268 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0910 13:53:01.939651    2268 start.go:534] Will wait 60s for crictl version
	I0910 13:53:01.939698    2268 ssh_runner.go:195] Run: which crictl
	I0910 13:53:01.940987    2268 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 13:53:01.954952    2268 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1alpha2
	I0910 13:53:01.955020    2268 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 13:53:01.972480    2268 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 13:53:01.988035    2268 out.go:204] * Preparing Kubernetes v1.28.1 on Docker 24.0.6 ...
	I0910 13:53:01.988175    2268 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0910 13:53:01.989661    2268 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 13:53:01.993897    2268 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0910 13:53:01.993935    2268 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0910 13:53:01.999113    2268 docker.go:636] Got preloaded images: 
	I0910 13:53:01.999122    2268 docker.go:642] registry.k8s.io/kube-apiserver:v1.28.1 wasn't preloaded
	I0910 13:53:01.999173    2268 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0910 13:53:02.002367    2268 ssh_runner.go:195] Run: which lz4
	I0910 13:53:02.003807    2268 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 13:53:02.005140    2268 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 13:53:02.005152    2268 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356902558 bytes)
	I0910 13:53:03.330855    2268 docker.go:600] Took 1.327106 seconds to copy over tarball
	I0910 13:53:03.330908    2268 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 13:53:04.361932    2268 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.0310165s)
	I0910 13:53:04.361948    2268 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0910 13:53:04.378082    2268 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0910 13:53:04.381561    2268 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0910 13:53:04.386796    2268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 13:53:04.465258    2268 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 13:53:06.768416    2268 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.303167042s)
	I0910 13:53:06.768508    2268 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0910 13:53:06.774897    2268 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
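	The preload logic above runs `docker images` twice: before extracting the tarball it finds no images and logs that `kube-apiserver:v1.28.1 wasn't preloaded`; after extraction the expected image appears and loading is skipped. A sketch of that membership check (function name illustrative):

```go
package main

import "fmt"

// wasPreloaded reports whether the expected image shows up in
// `docker images --format {{.Repository}}:{{.Tag}}` output, mirroring
// the check that decides between extracting the preload tarball and
// skipping image loading.
func wasPreloaded(images []string, want string) bool {
	for _, img := range images {
		if img == want {
			return true
		}
	}
	return false
}

func main() {
	before := []string{} // first `docker images` call returned nothing
	after := []string{
		"registry.k8s.io/kube-apiserver:v1.28.1",
		"registry.k8s.io/kube-proxy:v1.28.1",
	}
	want := "registry.k8s.io/kube-apiserver:v1.28.1"
	fmt.Println(wasPreloaded(before, want), wasPreloaded(after, want)) // false true
}
```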
	I0910 13:53:06.774907    2268 cache_images.go:84] Images are preloaded, skipping loading
	I0910 13:53:06.774967    2268 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0910 13:53:06.782818    2268 cni.go:84] Creating CNI manager for ""
	I0910 13:53:06.782827    2268 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 13:53:06.782847    2268 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0910 13:53:06.782875    2268 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-899000 NodeName:addons-899000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 13:53:06.782940    2268 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-899000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 13:53:06.782977    2268 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-899000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:addons-899000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0910 13:53:06.783032    2268 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0910 13:53:06.786489    2268 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 13:53:06.786515    2268 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 13:53:06.789695    2268 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0910 13:53:06.795069    2268 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 13:53:06.800167    2268 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0910 13:53:06.805157    2268 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0910 13:53:06.806531    2268 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 13:53:06.810422    2268 certs.go:56] Setting up /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/addons-899000 for IP: 192.168.105.2
	I0910 13:53:06.810432    2268 certs.go:190] acquiring lock for shared ca certs: {Name:mk28134b321cd562735798fd2fcb10a58019fa5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 13:53:06.810592    2268 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/17207-1093/.minikube/ca.key
	I0910 13:53:06.853961    2268 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17207-1093/.minikube/ca.crt ...
	I0910 13:53:06.853970    2268 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/ca.crt: {Name:mkc17bbcb1b3b6a2ce03370c9c10ce5ad419f161 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 13:53:06.854160    2268 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17207-1093/.minikube/ca.key ...
	I0910 13:53:06.854163    2268 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/ca.key: {Name:mkec40528a0efaa0e72d9ff28858891bbb97229b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 13:53:06.854285    2268 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/17207-1093/.minikube/proxy-client-ca.key
	I0910 13:53:07.005544    2268 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17207-1093/.minikube/proxy-client-ca.crt ...
	I0910 13:53:07.005551    2268 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/proxy-client-ca.crt: {Name:mk57dd3b9138ff0e6161484aadc5c48b2d1a2182 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 13:53:07.005772    2268 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17207-1093/.minikube/proxy-client-ca.key ...
	I0910 13:53:07.005775    2268 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/proxy-client-ca.key: {Name:mk92349f3a46bb120933a6146562d2f192802d08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 13:53:07.005902    2268 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/addons-899000/client.key
	I0910 13:53:07.005921    2268 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/addons-899000/client.crt with IP's: []
	I0910 13:53:07.087577    2268 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/addons-899000/client.crt ...
	I0910 13:53:07.087581    2268 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/addons-899000/client.crt: {Name:mk867c773431ad8843cca85ee5d97bdcec8a6d86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 13:53:07.087726    2268 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/addons-899000/client.key ...
	I0910 13:53:07.087728    2268 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/addons-899000/client.key: {Name:mk0d7f50f5f1a1be5b7891a9495d1b6b9f10fadf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 13:53:07.087830    2268 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/addons-899000/apiserver.key.96055969
	I0910 13:53:07.087842    2268 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/addons-899000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0910 13:53:07.198919    2268 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/addons-899000/apiserver.crt.96055969 ...
	I0910 13:53:07.198922    2268 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/addons-899000/apiserver.crt.96055969: {Name:mka69c8bf9aaaeb50f56f6c747b6a13540f743d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 13:53:07.199076    2268 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/addons-899000/apiserver.key.96055969 ...
	I0910 13:53:07.199078    2268 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/addons-899000/apiserver.key.96055969: {Name:mkcf29cba35b4b26bf0f821a4288c857963e1b76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 13:53:07.199184    2268 certs.go:337] copying /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/addons-899000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/addons-899000/apiserver.crt
	I0910 13:53:07.199394    2268 certs.go:341] copying /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/addons-899000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/addons-899000/apiserver.key
	I0910 13:53:07.199523    2268 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/addons-899000/proxy-client.key
	I0910 13:53:07.199535    2268 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/addons-899000/proxy-client.crt with IP's: []
	I0910 13:53:07.289623    2268 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/addons-899000/proxy-client.crt ...
	I0910 13:53:07.289627    2268 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/addons-899000/proxy-client.crt: {Name:mkf8d1c472e469f12b884decfab060ddcf8a25c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 13:53:07.289789    2268 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/addons-899000/proxy-client.key ...
	I0910 13:53:07.289792    2268 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/addons-899000/proxy-client.key: {Name:mkbedce4e588f63916ad6cc76aebbeb55217bcf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 13:53:07.290073    2268 certs.go:437] found cert: /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca-key.pem (1675 bytes)
	I0910 13:53:07.290101    2268 certs.go:437] found cert: /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem (1078 bytes)
	I0910 13:53:07.290124    2268 certs.go:437] found cert: /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem (1123 bytes)
	I0910 13:53:07.290148    2268 certs.go:437] found cert: /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/key.pem (1675 bytes)
	I0910 13:53:07.290517    2268 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/addons-899000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0910 13:53:07.298800    2268 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/addons-899000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0910 13:53:07.306132    2268 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/addons-899000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 13:53:07.313209    2268 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/addons-899000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 13:53:07.320035    2268 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 13:53:07.326681    2268 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 13:53:07.333891    2268 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 13:53:07.341048    2268 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 13:53:07.347749    2268 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 13:53:07.354334    2268 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 13:53:07.360128    2268 ssh_runner.go:195] Run: openssl version
	I0910 13:53:07.361961    2268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 13:53:07.365048    2268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 13:53:07.366477    2268 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 10 20:53 /usr/share/ca-certificates/minikubeCA.pem
	I0910 13:53:07.366499    2268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 13:53:07.368340    2268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 13:53:07.371056    2268 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0910 13:53:07.372487    2268 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0910 13:53:07.372522    2268 kubeadm.go:404] StartCluster: {Name:addons-899000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-899000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 13:53:07.372587    2268 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0910 13:53:07.377969    2268 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 13:53:07.381235    2268 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 13:53:07.383857    2268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 13:53:07.386587    2268 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 13:53:07.386601    2268 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 13:53:07.407460    2268 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0910 13:53:07.407506    2268 kubeadm.go:322] [preflight] Running pre-flight checks
	I0910 13:53:07.462122    2268 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 13:53:07.462191    2268 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 13:53:07.462241    2268 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0910 13:53:07.520217    2268 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 13:53:07.529354    2268 out.go:204]   - Generating certificates and keys ...
	I0910 13:53:07.529397    2268 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0910 13:53:07.529431    2268 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0910 13:53:07.558645    2268 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0910 13:53:07.644279    2268 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0910 13:53:07.694529    2268 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0910 13:53:07.728457    2268 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0910 13:53:07.758788    2268 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0910 13:53:07.758862    2268 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-899000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0910 13:53:07.866669    2268 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0910 13:53:07.866742    2268 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-899000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0910 13:53:07.924916    2268 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0910 13:53:08.079944    2268 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0910 13:53:08.161245    2268 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0910 13:53:08.161274    2268 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 13:53:08.324889    2268 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 13:53:08.500636    2268 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 13:53:08.616899    2268 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 13:53:08.706838    2268 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 13:53:08.707053    2268 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 13:53:08.708042    2268 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 13:53:08.711427    2268 out.go:204]   - Booting up control plane ...
	I0910 13:53:08.711480    2268 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 13:53:08.711519    2268 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 13:53:08.711550    2268 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 13:53:08.715540    2268 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 13:53:08.715867    2268 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 13:53:08.715926    2268 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0910 13:53:08.809751    2268 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0910 13:53:12.809801    2268 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.001508 seconds
	I0910 13:53:12.809860    2268 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0910 13:53:12.815460    2268 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0910 13:53:13.324305    2268 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0910 13:53:13.324397    2268 kubeadm.go:322] [mark-control-plane] Marking the node addons-899000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0910 13:53:13.829266    2268 kubeadm.go:322] [bootstrap-token] Using token: 76hp0d.h7ua1zwhooyyu8gl
	I0910 13:53:13.833603    2268 out.go:204]   - Configuring RBAC rules ...
	I0910 13:53:13.833653    2268 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0910 13:53:13.834482    2268 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0910 13:53:13.841594    2268 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0910 13:53:13.842830    2268 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0910 13:53:13.844474    2268 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0910 13:53:13.845639    2268 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0910 13:53:13.849778    2268 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0910 13:53:14.020152    2268 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0910 13:53:14.236782    2268 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0910 13:53:14.237309    2268 kubeadm.go:322] 
	I0910 13:53:14.237339    2268 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0910 13:53:14.237345    2268 kubeadm.go:322] 
	I0910 13:53:14.237382    2268 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0910 13:53:14.237385    2268 kubeadm.go:322] 
	I0910 13:53:14.237397    2268 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0910 13:53:14.237424    2268 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0910 13:53:14.237447    2268 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0910 13:53:14.237452    2268 kubeadm.go:322] 
	I0910 13:53:14.237478    2268 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0910 13:53:14.237482    2268 kubeadm.go:322] 
	I0910 13:53:14.237513    2268 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0910 13:53:14.237518    2268 kubeadm.go:322] 
	I0910 13:53:14.237541    2268 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0910 13:53:14.237579    2268 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0910 13:53:14.237609    2268 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0910 13:53:14.237612    2268 kubeadm.go:322] 
	I0910 13:53:14.237657    2268 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0910 13:53:14.237700    2268 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0910 13:53:14.237703    2268 kubeadm.go:322] 
	I0910 13:53:14.237741    2268 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 76hp0d.h7ua1zwhooyyu8gl \
	I0910 13:53:14.237808    2268 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:10bd6c29805637182224d42f91d4bace622161cd91f1a9b0f464f3aed87a5ead \
	I0910 13:53:14.237820    2268 kubeadm.go:322] 	--control-plane 
	I0910 13:53:14.237822    2268 kubeadm.go:322] 
	I0910 13:53:14.237863    2268 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0910 13:53:14.237868    2268 kubeadm.go:322] 
	I0910 13:53:14.237906    2268 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 76hp0d.h7ua1zwhooyyu8gl \
	I0910 13:53:14.237966    2268 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:10bd6c29805637182224d42f91d4bace622161cd91f1a9b0f464f3aed87a5ead 
	I0910 13:53:14.238024    2268 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 13:53:14.238032    2268 cni.go:84] Creating CNI manager for ""
	I0910 13:53:14.238040    2268 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 13:53:14.245402    2268 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0910 13:53:14.248499    2268 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 13:53:14.251581    2268 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0910 13:53:14.256186    2268 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 13:53:14.256247    2268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:53:14.256248    2268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d731e1cec1979d094cdaebcdf1ed599ff8209767 minikube.k8s.io/name=addons-899000 minikube.k8s.io/updated_at=2023_09_10T13_53_14_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:53:14.320046    2268 ops.go:34] apiserver oom_adj: -16
	I0910 13:53:14.320067    2268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:53:14.350365    2268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:53:14.885512    2268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:53:15.385486    2268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:53:15.885488    2268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:53:16.385484    2268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:53:16.885515    2268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:53:17.385539    2268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:53:17.884514    2268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:53:18.385504    2268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:53:18.885475    2268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:53:19.385430    2268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:53:19.885174    2268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:53:20.385447    2268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:53:20.885462    2268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:53:21.385445    2268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:53:21.885466    2268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:53:22.385474    2268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:53:22.885463    2268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:53:23.385414    2268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:53:23.885404    2268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:53:24.385411    2268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:53:24.885433    2268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:53:25.385307    2268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:53:25.885410    2268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:53:26.384159    2268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:53:26.885358    2268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:53:27.385391    2268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:53:27.450160    2268 kubeadm.go:1081] duration metric: took 13.194085375s to wait for elevateKubeSystemPrivileges.
	I0910 13:53:27.450177    2268 kubeadm.go:406] StartCluster complete in 20.077869417s
	I0910 13:53:27.450188    2268 settings.go:142] acquiring lock: {Name:mk5069f344fe5f68592bc6867db9aede10bc3fe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 13:53:27.450353    2268 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 13:53:27.450644    2268 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/kubeconfig: {Name:mk7c70008fc2d1b0ba569659f9157708891e79a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 13:53:27.450868    2268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0910 13:53:27.450956    2268 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0910 13:53:27.451049    2268 addons.go:69] Setting volumesnapshots=true in profile "addons-899000"
	I0910 13:53:27.451053    2268 addons.go:69] Setting ingress=true in profile "addons-899000"
	I0910 13:53:27.451056    2268 addons.go:231] Setting addon volumesnapshots=true in "addons-899000"
	I0910 13:53:27.451061    2268 addons.go:231] Setting addon ingress=true in "addons-899000"
	I0910 13:53:27.451098    2268 host.go:66] Checking if "addons-899000" exists ...
	I0910 13:53:27.451100    2268 addons.go:69] Setting registry=true in profile "addons-899000"
	I0910 13:53:27.451104    2268 addons.go:231] Setting addon registry=true in "addons-899000"
	I0910 13:53:27.451104    2268 addons.go:69] Setting inspektor-gadget=true in profile "addons-899000"
	I0910 13:53:27.451116    2268 host.go:66] Checking if "addons-899000" exists ...
	I0910 13:53:27.451121    2268 addons.go:69] Setting storage-provisioner=true in profile "addons-899000"
	I0910 13:53:27.451156    2268 addons.go:231] Setting addon storage-provisioner=true in "addons-899000"
	I0910 13:53:27.451098    2268 host.go:66] Checking if "addons-899000" exists ...
	I0910 13:53:27.451114    2268 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-899000"
	I0910 13:53:27.451190    2268 host.go:66] Checking if "addons-899000" exists ...
	I0910 13:53:27.451103    2268 addons.go:69] Setting ingress-dns=true in profile "addons-899000"
	I0910 13:53:27.451234    2268 addons.go:231] Setting addon ingress-dns=true in "addons-899000"
	I0910 13:53:27.451246    2268 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-899000"
	I0910 13:53:27.451282    2268 host.go:66] Checking if "addons-899000" exists ...
	I0910 13:53:27.451304    2268 host.go:66] Checking if "addons-899000" exists ...
	I0910 13:53:27.451095    2268 addons.go:69] Setting metrics-server=true in profile "addons-899000"
	I0910 13:53:27.451414    2268 addons.go:231] Setting addon metrics-server=true in "addons-899000"
	W0910 13:53:27.451455    2268 host.go:54] host status for "addons-899000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/addons-899000/monitor: connect: connection refused
	W0910 13:53:27.451464    2268 addons.go:277] "addons-899000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	I0910 13:53:27.451022    2268 config.go:182] Loaded profile config "addons-899000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 13:53:27.451505    2268 host.go:66] Checking if "addons-899000" exists ...
	I0910 13:53:27.451132    2268 addons.go:69] Setting cloud-spanner=true in profile "addons-899000"
	I0910 13:53:27.451543    2268 addons.go:231] Setting addon cloud-spanner=true in "addons-899000"
	I0910 13:53:27.451578    2268 host.go:66] Checking if "addons-899000" exists ...
	W0910 13:53:27.451621    2268 host.go:54] host status for "addons-899000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/addons-899000/monitor: connect: connection refused
	W0910 13:53:27.451627    2268 addons.go:277] "addons-899000" is not running, setting volumesnapshots=true and skipping enablement (err=<nil>)
	W0910 13:53:27.451621    2268 host.go:54] host status for "addons-899000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/addons-899000/monitor: connect: connection refused
	W0910 13:53:27.451632    2268 addons.go:277] "addons-899000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	I0910 13:53:27.451138    2268 addons.go:69] Setting default-storageclass=true in profile "addons-899000"
	I0910 13:53:27.451134    2268 addons.go:231] Setting addon inspektor-gadget=true in "addons-899000"
	I0910 13:53:27.451649    2268 host.go:66] Checking if "addons-899000" exists ...
	I0910 13:53:27.451137    2268 addons.go:69] Setting gcp-auth=true in profile "addons-899000"
	I0910 13:53:27.451715    2268 mustload.go:65] Loading cluster: addons-899000
	I0910 13:53:27.451639    2268 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-899000"
	W0910 13:53:27.451849    2268 host.go:54] host status for "addons-899000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/addons-899000/monitor: connect: connection refused
	W0910 13:53:27.451853    2268 addons.go:277] "addons-899000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	W0910 13:53:27.451872    2268 host.go:54] host status for "addons-899000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/addons-899000/monitor: connect: connection refused
	W0910 13:53:27.451878    2268 addons.go:277] "addons-899000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	I0910 13:53:27.451921    2268 config.go:182] Loaded profile config "addons-899000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	W0910 13:53:27.451938    2268 host.go:54] host status for "addons-899000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/addons-899000/monitor: connect: connection refused
	W0910 13:53:27.451943    2268 addons_storage_classes.go:55] "addons-899000" is not running, writing default-storageclass=true to disk and skipping enablement
	I0910 13:53:27.451947    2268 addons.go:231] Setting addon default-storageclass=true in "addons-899000"
	I0910 13:53:27.451954    2268 host.go:66] Checking if "addons-899000" exists ...
	W0910 13:53:27.451964    2268 host.go:54] host status for "addons-899000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/addons-899000/monitor: connect: connection refused
	W0910 13:53:27.451972    2268 addons.go:277] "addons-899000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	I0910 13:53:27.451974    2268 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-899000"
	I0910 13:53:27.456268    2268 out.go:177] * Verifying csi-hostpath-driver addon...
	W0910 13:53:27.452114    2268 host.go:54] host status for "addons-899000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/addons-899000/monitor: connect: connection refused
	W0910 13:53:27.452177    2268 host.go:54] host status for "addons-899000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/addons-899000/monitor: connect: connection refused
	W0910 13:53:27.463319    2268 addons.go:277] "addons-899000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	W0910 13:53:27.463360    2268 addons.go:277] "addons-899000" is not running, setting default-storageclass=true and skipping enablement (err=<nil>)
	I0910 13:53:27.463768    2268 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0910 13:53:27.467136    2268 out.go:177] 
	I0910 13:53:27.470275    2268 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.1
	I0910 13:53:27.473250    2268 addons.go:467] Verifying addon metrics-server=true in "addons-899000"
	I0910 13:53:27.473254    2268 out.go:177]   - Using image docker.io/registry:2.8.1
	I0910 13:53:27.480204    2268 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0910 13:53:27.473906    2268 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-899000" context rescaled to 1 replicas
	I0910 13:53:27.482071    2268 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0910 13:53:27.483378    2268 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	W0910 13:53:27.488275    2268 out.go:239] X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/addons-899000/monitor: connect: connection refused
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/addons-899000/monitor: connect: connection refused
	I0910 13:53:27.491321    2268 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0910 13:53:27.494283    2268 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	W0910 13:53:27.494318    2268 out.go:239] * 
	* 
	I0910 13:53:27.504251    2268 out.go:177] * Verifying Kubernetes components...
	W0910 13:53:27.504692    2268 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 13:53:27.511278    2268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 13:53:27.515352    2268 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I0910 13:53:27.521356    2268 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0910 13:53:27.525272    2268 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0910 13:53:27.525283    2268 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/addons-899000/id_rsa Username:docker}
	I0910 13:53:27.525340    2268 out.go:177] 

** /stderr **
addons_test.go:90: out/minikube-darwin-arm64 start -p addons-899000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (44.13s)
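Every failure in this section reduces to the same symptom: `connect: dial unix .../monitor: connect: connection refused` on the machine's monitor socket (and, in the sections below, the same refusal on `/var/run/socket_vmnet`). As a diagnostic aid only (not part of the test suite), a minimal probe sketch that distinguishes a missing socket file from one with no daemon listening behind it — the helper name is ours:

```python
import socket

def probe_unix_socket(path: str) -> str:
    """Classify a unix socket path: "listening" (a daemon accepted the
    connection), "refused" (the socket file exists but nothing is
    listening, matching the "connect: connection refused" in the logs),
    or "missing" (no socket file at all)."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        return "listening"
    except (FileNotFoundError, NotADirectoryError):
        return "missing"
    except ConnectionRefusedError:
        return "refused"
    finally:
        s.close()

# Path taken from the log output above; the monitor socket lives under
# $MINIKUBE_HOME/machines/<profile>/monitor on the CI host.
print("/var/run/socket_vmnet", "->", probe_unix_socket("/var/run/socket_vmnet"))
```

A "refused" result points at a stale socket file (socket_vmnet or the QEMU monitor died), while "missing" means the VM was never created.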

TestCertOptions (10.12s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-415000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-415000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.844691667s)

-- stdout --
	* [cert-options-415000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-options-415000 in cluster cert-options-415000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-415000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-415000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-415000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-415000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-415000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 89 (81.405958ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-415000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-415000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 89
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-415000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-415000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-415000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 89 (38.776125ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-415000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-415000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 89
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-415000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2023-09-10 14:05:40.771085 -0700 PDT m=+813.455267751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-415000 -n cert-options-415000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-415000 -n cert-options-415000: exit status 7 (29.873375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-415000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-415000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-415000
--- FAIL: TestCertOptions (10.12s)
E0910 14:06:12.901996    2200 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/functional-765000/client.crt: no such file or directory
E0910 14:06:40.612626    2200 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/functional-765000/client.crt: no such file or directory
E0910 14:06:58.868374    2200 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/client.crt: no such file or directory

TestCertExpiration (195.15s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-225000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-225000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.752508708s)

-- stdout --
	* [cert-expiration-225000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-expiration-225000 in cluster cert-expiration-225000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-225000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-225000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-225000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
E0910 14:05:36.945926    2200 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-225000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-225000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.225365916s)

-- stdout --
	* [cert-expiration-225000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-225000 in cluster cert-expiration-225000
	* Restarting existing qemu2 VM for "cert-expiration-225000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-225000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-225000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-225000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-225000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-225000 in cluster cert-expiration-225000
	* Restarting existing qemu2 VM for "cert-expiration-225000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-225000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-225000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2023-09-10 14:08:40.692429 -0700 PDT m=+993.376998543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-225000 -n cert-expiration-225000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-225000 -n cert-expiration-225000: exit status 7 (69.928ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-225000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-225000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-225000
--- FAIL: TestCertExpiration (195.15s)
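TestCertExpiration asserts that a second `minikube start` after the `--cert-expiration=3m` window warns about expired certs; here the VM never booted, so the assertion at cert_options_test.go:136 was unreachable. For manual verification against a cluster that does start, a sketch of the underlying expiry check — the in-VM path `/var/lib/minikube/certs/apiserver.crt` appears in the logs above (reachable via `minikube ssh`); the helper name is ours:

```python
import subprocess

def cert_not_after(cert_path: str) -> str:
    """Return a certificate's notAfter timestamp via openssl, i.e. the
    value an expired-cert warning would be derived from."""
    out = subprocess.run(
        ["openssl", "x509", "-noout", "-enddate", "-in", cert_path],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    # openssl prints e.g. "notAfter=Sep 10 21:08:40 2024 GMT"
    return out.removeprefix("notAfter=")
```

If the timestamp is in the past when the `--cert-expiration=8760h` restart runs, the test expects the start output to mention the expired certificate.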

TestDockerFlags (10.16s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-755000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-755000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.910554958s)

-- stdout --
	* [docker-flags-755000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node docker-flags-755000 in cluster docker-flags-755000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-755000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 14:05:20.639532    3959 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:05:20.639647    3959 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:05:20.639652    3959 out.go:309] Setting ErrFile to fd 2...
	I0910 14:05:20.639654    3959 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:05:20.639758    3959 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:05:20.640772    3959 out.go:303] Setting JSON to false
	I0910 14:05:20.656327    3959 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2095,"bootTime":1694377825,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0910 14:05:20.656396    3959 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0910 14:05:20.660981    3959 out.go:177] * [docker-flags-755000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0910 14:05:20.668746    3959 notify.go:220] Checking for updates...
	I0910 14:05:20.672899    3959 out.go:177]   - MINIKUBE_LOCATION=17207
	I0910 14:05:20.675959    3959 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 14:05:20.677258    3959 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 14:05:20.679879    3959 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 14:05:20.682902    3959 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	I0910 14:05:20.685959    3959 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 14:05:20.689265    3959 config.go:182] Loaded profile config "force-systemd-flag-624000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:05:20.689329    3959 config.go:182] Loaded profile config "multinode-362000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:05:20.689369    3959 driver.go:373] Setting default libvirt URI to qemu:///system
	I0910 14:05:20.693848    3959 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 14:05:20.700901    3959 start.go:298] selected driver: qemu2
	I0910 14:05:20.700917    3959 start.go:902] validating driver "qemu2" against <nil>
	I0910 14:05:20.700925    3959 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 14:05:20.702880    3959 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0910 14:05:20.705853    3959 out.go:177] * Automatically selected the socket_vmnet network
	I0910 14:05:20.709004    3959 start_flags.go:917] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0910 14:05:20.709035    3959 cni.go:84] Creating CNI manager for ""
	I0910 14:05:20.709044    3959 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 14:05:20.709048    3959 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 14:05:20.709054    3959 start_flags.go:321] config:
	{Name:docker-flags-755000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:docker-flags-755000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/ru
n/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 14:05:20.713263    3959 iso.go:125] acquiring lock: {Name:mk7b64d48bce9081530bc49bdfed6972c82152dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:05:20.719933    3959 out.go:177] * Starting control plane node docker-flags-755000 in cluster docker-flags-755000
	I0910 14:05:20.722899    3959 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0910 14:05:20.722918    3959 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0910 14:05:20.722935    3959 cache.go:57] Caching tarball of preloaded images
	I0910 14:05:20.722988    3959 preload.go:174] Found /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 14:05:20.722993    3959 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0910 14:05:20.723070    3959 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/docker-flags-755000/config.json ...
	I0910 14:05:20.723087    3959 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/docker-flags-755000/config.json: {Name:mke2bf6bbe4785773e2b347bd412eefb86f2a957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 14:05:20.723288    3959 start.go:365] acquiring machines lock for docker-flags-755000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:05:20.723323    3959 start.go:369] acquired machines lock for "docker-flags-755000" in 24.292µs
	I0910 14:05:20.723335    3959 start.go:93] Provisioning new machine with config: &{Name:docker-flags-755000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root
SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:docker-flags-755000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:05:20.723370    3959 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:05:20.731874    3959 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0910 14:05:20.747963    3959 start.go:159] libmachine.API.Create for "docker-flags-755000" (driver="qemu2")
	I0910 14:05:20.747988    3959 client.go:168] LocalClient.Create starting
	I0910 14:05:20.748045    3959 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:05:20.748079    3959 main.go:141] libmachine: Decoding PEM data...
	I0910 14:05:20.748091    3959 main.go:141] libmachine: Parsing certificate...
	I0910 14:05:20.748133    3959 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:05:20.748151    3959 main.go:141] libmachine: Decoding PEM data...
	I0910 14:05:20.748161    3959 main.go:141] libmachine: Parsing certificate...
	I0910 14:05:20.748503    3959 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:05:20.902468    3959 main.go:141] libmachine: Creating SSH key...
	I0910 14:05:21.066528    3959 main.go:141] libmachine: Creating Disk image...
	I0910 14:05:21.066534    3959 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:05:21.066726    3959 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/docker-flags-755000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/docker-flags-755000/disk.qcow2
	I0910 14:05:21.075603    3959 main.go:141] libmachine: STDOUT: 
	I0910 14:05:21.075620    3959 main.go:141] libmachine: STDERR: 
	I0910 14:05:21.075676    3959 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/docker-flags-755000/disk.qcow2 +20000M
	I0910 14:05:21.082809    3959 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:05:21.082829    3959 main.go:141] libmachine: STDERR: 
	I0910 14:05:21.082857    3959 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/docker-flags-755000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/docker-flags-755000/disk.qcow2
	I0910 14:05:21.082863    3959 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:05:21.082901    3959 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/docker-flags-755000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/docker-flags-755000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/docker-flags-755000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:0c:57:e3:41:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/docker-flags-755000/disk.qcow2
	I0910 14:05:21.084434    3959 main.go:141] libmachine: STDOUT: 
	I0910 14:05:21.084447    3959 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:05:21.084464    3959 client.go:171] LocalClient.Create took 336.471291ms
	I0910 14:05:23.086689    3959 start.go:128] duration metric: createHost completed in 2.363303917s
	I0910 14:05:23.086745    3959 start.go:83] releasing machines lock for "docker-flags-755000", held for 2.363417292s
	W0910 14:05:23.086841    3959 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:05:23.100905    3959 out.go:177] * Deleting "docker-flags-755000" in qemu2 ...
	W0910 14:05:23.117128    3959 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:05:23.117146    3959 start.go:687] Will try again in 5 seconds ...
	I0910 14:05:28.119319    3959 start.go:365] acquiring machines lock for docker-flags-755000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:05:28.144281    3959 start.go:369] acquired machines lock for "docker-flags-755000" in 24.795792ms
	I0910 14:05:28.144435    3959 start.go:93] Provisioning new machine with config: &{Name:docker-flags-755000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root
SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:docker-flags-755000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:05:28.144885    3959 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:05:28.153488    3959 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0910 14:05:28.198522    3959 start.go:159] libmachine.API.Create for "docker-flags-755000" (driver="qemu2")
	I0910 14:05:28.198574    3959 client.go:168] LocalClient.Create starting
	I0910 14:05:28.198712    3959 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:05:28.198766    3959 main.go:141] libmachine: Decoding PEM data...
	I0910 14:05:28.198781    3959 main.go:141] libmachine: Parsing certificate...
	I0910 14:05:28.198846    3959 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:05:28.198883    3959 main.go:141] libmachine: Decoding PEM data...
	I0910 14:05:28.198896    3959 main.go:141] libmachine: Parsing certificate...
	I0910 14:05:28.199388    3959 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:05:28.331490    3959 main.go:141] libmachine: Creating SSH key...
	I0910 14:05:28.461153    3959 main.go:141] libmachine: Creating Disk image...
	I0910 14:05:28.461158    3959 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:05:28.461912    3959 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/docker-flags-755000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/docker-flags-755000/disk.qcow2
	I0910 14:05:28.471205    3959 main.go:141] libmachine: STDOUT: 
	I0910 14:05:28.471222    3959 main.go:141] libmachine: STDERR: 
	I0910 14:05:28.471270    3959 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/docker-flags-755000/disk.qcow2 +20000M
	I0910 14:05:28.478432    3959 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:05:28.478443    3959 main.go:141] libmachine: STDERR: 
	I0910 14:05:28.478462    3959 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/docker-flags-755000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/docker-flags-755000/disk.qcow2
	I0910 14:05:28.478471    3959 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:05:28.478519    3959 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/docker-flags-755000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/docker-flags-755000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/docker-flags-755000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:8f:3c:fb:6c:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/docker-flags-755000/disk.qcow2
	I0910 14:05:28.480067    3959 main.go:141] libmachine: STDOUT: 
	I0910 14:05:28.480085    3959 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:05:28.480099    3959 client.go:171] LocalClient.Create took 281.520708ms
	I0910 14:05:30.482286    3959 start.go:128] duration metric: createHost completed in 2.337377875s
	I0910 14:05:30.482377    3959 start.go:83] releasing machines lock for "docker-flags-755000", held for 2.338066375s
	W0910 14:05:30.482856    3959 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-755000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-755000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:05:30.493647    3959 out.go:177] 
	W0910 14:05:30.498597    3959 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 14:05:30.498624    3959 out.go:239] * 
	* 
	W0910 14:05:30.501178    3959 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 14:05:30.509578    3959 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-755000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-755000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-755000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 89 (79.022334ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-755000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-755000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 89
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-755000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-755000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-755000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-755000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 89 (44.343333ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-755000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-755000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 89
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-755000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-755000\"\n"
panic.go:522: *** TestDockerFlags FAILED at 2023-09-10 14:05:30.649301 -0700 PDT m=+803.333461960
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-755000 -n docker-flags-755000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-755000 -n docker-flags-755000: exit status 7 (29.175833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-755000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-755000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-755000
--- FAIL: TestDockerFlags (10.16s)

TestForceSystemdFlag (12.08s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-624000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-624000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.875320084s)

-- stdout --
	* [force-systemd-flag-624000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-flag-624000 in cluster force-systemd-flag-624000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-624000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 14:05:13.656935    3937 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:05:13.657051    3937 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:05:13.657054    3937 out.go:309] Setting ErrFile to fd 2...
	I0910 14:05:13.657056    3937 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:05:13.657165    3937 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:05:13.658166    3937 out.go:303] Setting JSON to false
	I0910 14:05:13.672899    3937 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2088,"bootTime":1694377825,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0910 14:05:13.672970    3937 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0910 14:05:13.681836    3937 out.go:177] * [force-systemd-flag-624000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0910 14:05:13.684888    3937 out.go:177]   - MINIKUBE_LOCATION=17207
	I0910 14:05:13.684940    3937 notify.go:220] Checking for updates...
	I0910 14:05:13.687858    3937 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 14:05:13.691842    3937 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 14:05:13.693167    3937 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 14:05:13.696834    3937 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	I0910 14:05:13.699860    3937 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 14:05:13.701445    3937 config.go:182] Loaded profile config "force-systemd-env-593000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:05:13.701525    3937 config.go:182] Loaded profile config "multinode-362000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:05:13.701562    3937 driver.go:373] Setting default libvirt URI to qemu:///system
	I0910 14:05:13.705831    3937 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 14:05:13.712655    3937 start.go:298] selected driver: qemu2
	I0910 14:05:13.712660    3937 start.go:902] validating driver "qemu2" against <nil>
	I0910 14:05:13.712665    3937 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 14:05:13.714537    3937 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0910 14:05:13.717854    3937 out.go:177] * Automatically selected the socket_vmnet network
	I0910 14:05:13.720932    3937 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0910 14:05:13.720950    3937 cni.go:84] Creating CNI manager for ""
	I0910 14:05:13.720957    3937 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 14:05:13.720963    3937 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 14:05:13.720968    3937 start_flags.go:321] config:
	{Name:force-systemd-flag-624000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-flag-624000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 14:05:13.725312    3937 iso.go:125] acquiring lock: {Name:mk7b64d48bce9081530bc49bdfed6972c82152dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:05:13.732863    3937 out.go:177] * Starting control plane node force-systemd-flag-624000 in cluster force-systemd-flag-624000
	I0910 14:05:13.736849    3937 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0910 14:05:13.736868    3937 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0910 14:05:13.736884    3937 cache.go:57] Caching tarball of preloaded images
	I0910 14:05:13.736945    3937 preload.go:174] Found /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 14:05:13.736951    3937 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0910 14:05:13.737022    3937 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/force-systemd-flag-624000/config.json ...
	I0910 14:05:13.737039    3937 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/force-systemd-flag-624000/config.json: {Name:mkf72195813828a67d057d6a6822de0963686a27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 14:05:13.737260    3937 start.go:365] acquiring machines lock for force-systemd-flag-624000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:05:13.737292    3937 start.go:369] acquired machines lock for "force-systemd-flag-624000" in 23.5µs
	I0910 14:05:13.737303    3937 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-624000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-flag-624000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:05:13.737335    3937 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:05:13.745835    3937 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0910 14:05:13.761880    3937 start.go:159] libmachine.API.Create for "force-systemd-flag-624000" (driver="qemu2")
	I0910 14:05:13.761911    3937 client.go:168] LocalClient.Create starting
	I0910 14:05:13.761973    3937 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:05:13.762006    3937 main.go:141] libmachine: Decoding PEM data...
	I0910 14:05:13.762022    3937 main.go:141] libmachine: Parsing certificate...
	I0910 14:05:13.762058    3937 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:05:13.762077    3937 main.go:141] libmachine: Decoding PEM data...
	I0910 14:05:13.762088    3937 main.go:141] libmachine: Parsing certificate...
	I0910 14:05:13.762423    3937 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:05:13.879761    3937 main.go:141] libmachine: Creating SSH key...
	I0910 14:05:13.979434    3937 main.go:141] libmachine: Creating Disk image...
	I0910 14:05:13.979439    3937 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:05:13.979596    3937 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/force-systemd-flag-624000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/force-systemd-flag-624000/disk.qcow2
	I0910 14:05:13.988142    3937 main.go:141] libmachine: STDOUT: 
	I0910 14:05:13.988156    3937 main.go:141] libmachine: STDERR: 
	I0910 14:05:13.988215    3937 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/force-systemd-flag-624000/disk.qcow2 +20000M
	I0910 14:05:13.995271    3937 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:05:13.995285    3937 main.go:141] libmachine: STDERR: 
	I0910 14:05:13.995299    3937 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/force-systemd-flag-624000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/force-systemd-flag-624000/disk.qcow2
	I0910 14:05:13.995312    3937 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:05:13.995346    3937 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/force-systemd-flag-624000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/force-systemd-flag-624000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/force-systemd-flag-624000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:5a:7d:11:f2:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/force-systemd-flag-624000/disk.qcow2
	I0910 14:05:13.996910    3937 main.go:141] libmachine: STDOUT: 
	I0910 14:05:13.996924    3937 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:05:13.996942    3937 client.go:171] LocalClient.Create took 235.024042ms
	I0910 14:05:15.999124    3937 start.go:128] duration metric: createHost completed in 2.261773834s
	I0910 14:05:15.999228    3937 start.go:83] releasing machines lock for "force-systemd-flag-624000", held for 2.261931291s
	W0910 14:05:15.999288    3937 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:05:16.008622    3937 out.go:177] * Deleting "force-systemd-flag-624000" in qemu2 ...
	W0910 14:05:16.029495    3937 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:05:16.029529    3937 start.go:687] Will try again in 5 seconds ...
	I0910 14:05:21.031614    3937 start.go:365] acquiring machines lock for force-systemd-flag-624000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:05:23.086879    3937 start.go:369] acquired machines lock for "force-systemd-flag-624000" in 2.055228583s
	I0910 14:05:23.087051    3937 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-624000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-flag-624000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:05:23.087367    3937 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:05:23.093014    3937 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0910 14:05:23.138377    3937 start.go:159] libmachine.API.Create for "force-systemd-flag-624000" (driver="qemu2")
	I0910 14:05:23.138412    3937 client.go:168] LocalClient.Create starting
	I0910 14:05:23.138547    3937 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:05:23.138619    3937 main.go:141] libmachine: Decoding PEM data...
	I0910 14:05:23.138639    3937 main.go:141] libmachine: Parsing certificate...
	I0910 14:05:23.138708    3937 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:05:23.138753    3937 main.go:141] libmachine: Decoding PEM data...
	I0910 14:05:23.138767    3937 main.go:141] libmachine: Parsing certificate...
	I0910 14:05:23.139242    3937 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:05:23.265898    3937 main.go:141] libmachine: Creating SSH key...
	I0910 14:05:23.446276    3937 main.go:141] libmachine: Creating Disk image...
	I0910 14:05:23.446283    3937 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:05:23.446442    3937 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/force-systemd-flag-624000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/force-systemd-flag-624000/disk.qcow2
	I0910 14:05:23.454924    3937 main.go:141] libmachine: STDOUT: 
	I0910 14:05:23.454942    3937 main.go:141] libmachine: STDERR: 
	I0910 14:05:23.455000    3937 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/force-systemd-flag-624000/disk.qcow2 +20000M
	I0910 14:05:23.462015    3937 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:05:23.462029    3937 main.go:141] libmachine: STDERR: 
	I0910 14:05:23.462042    3937 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/force-systemd-flag-624000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/force-systemd-flag-624000/disk.qcow2
	I0910 14:05:23.462050    3937 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:05:23.462090    3937 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/force-systemd-flag-624000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/force-systemd-flag-624000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/force-systemd-flag-624000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:6f:49:ce:bb:ed -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/force-systemd-flag-624000/disk.qcow2
	I0910 14:05:23.463470    3937 main.go:141] libmachine: STDOUT: 
	I0910 14:05:23.463486    3937 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:05:23.463497    3937 client.go:171] LocalClient.Create took 325.078459ms
	I0910 14:05:25.465675    3937 start.go:128] duration metric: createHost completed in 2.378274334s
	I0910 14:05:25.465790    3937 start.go:83] releasing machines lock for "force-systemd-flag-624000", held for 2.378852458s
	W0910 14:05:25.466203    3937 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-624000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-624000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:05:25.477435    3937 out.go:177] 
	W0910 14:05:25.481471    3937 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 14:05:25.481508    3937 out.go:239] * 
	* 
	W0910 14:05:25.484496    3937 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 14:05:25.493359    3937 out.go:177] 
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-624000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-624000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-624000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (77.064167ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-flag-624000"
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-624000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2023-09-10 14:05:25.58566 -0700 PDT m=+798.269810460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-624000 -n force-systemd-flag-624000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-624000 -n force-systemd-flag-624000: exit status 7 (32.259666ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-624000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-624000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-624000
--- FAIL: TestForceSystemdFlag (12.08s)
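
Every VM create in this run fails the same way: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet daemon was not listening on the agent when QEMU networking was set up. A minimal pre-flight sketch for this condition is below; the socket path is the `SocketVMnetPath` reported in the log above, and the Homebrew service hint in the comment is an assumption about how socket_vmnet is installed on this agent.

```shell
#!/bin/sh
# Pre-flight check: is the socket_vmnet daemon's unix socket present?
# SOCKET defaults to the SocketVMnetPath used by the qemu2 driver in this run.
SOCKET="${SOCKET:-/var/run/socket_vmnet}"
if [ -S "$SOCKET" ]; then
    STATUS=ok
    echo "ok: $SOCKET exists; socket_vmnet appears to be running"
else
    # Missing socket matches the "Connection refused" seen in the log.
    # On a Homebrew install the daemon can usually be started with:
    #   sudo brew services start socket_vmnet
    STATUS=missing
    echo "missing: $SOCKET (socket_vmnet daemon not running?)"
fi
```

Running such a check before the test suite (or in the failed tests' setup) would turn these 9-to-12-second provisioning failures into an immediate, actionable skip.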
TestForceSystemdEnv (9.97s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-593000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-593000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.756520417s)
-- stdout --
	* [force-systemd-env-593000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-env-593000 in cluster force-systemd-env-593000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-593000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0910 14:05:10.679700    3915 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:05:10.679879    3915 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:05:10.679882    3915 out.go:309] Setting ErrFile to fd 2...
	I0910 14:05:10.679885    3915 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:05:10.680013    3915 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:05:10.681563    3915 out.go:303] Setting JSON to false
	I0910 14:05:10.701067    3915 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2085,"bootTime":1694377825,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0910 14:05:10.701139    3915 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0910 14:05:10.705760    3915 out.go:177] * [force-systemd-env-593000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0910 14:05:10.713936    3915 notify.go:220] Checking for updates...
	I0910 14:05:10.716750    3915 out.go:177]   - MINIKUBE_LOCATION=17207
	I0910 14:05:10.719913    3915 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 14:05:10.722906    3915 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 14:05:10.725833    3915 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 14:05:10.728873    3915 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	I0910 14:05:10.731918    3915 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0910 14:05:10.733707    3915 config.go:182] Loaded profile config "multinode-362000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:05:10.733749    3915 driver.go:373] Setting default libvirt URI to qemu:///system
	I0910 14:05:10.737800    3915 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 14:05:10.744744    3915 start.go:298] selected driver: qemu2
	I0910 14:05:10.744757    3915 start.go:902] validating driver "qemu2" against <nil>
	I0910 14:05:10.744767    3915 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 14:05:10.747175    3915 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0910 14:05:10.749821    3915 out.go:177] * Automatically selected the socket_vmnet network
	I0910 14:05:10.752926    3915 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0910 14:05:10.752945    3915 cni.go:84] Creating CNI manager for ""
	I0910 14:05:10.752953    3915 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 14:05:10.752956    3915 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 14:05:10.752966    3915 start_flags.go:321] config:
	{Name:force-systemd-env-593000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-env-593000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 14:05:10.757279    3915 iso.go:125] acquiring lock: {Name:mk7b64d48bce9081530bc49bdfed6972c82152dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:05:10.764837    3915 out.go:177] * Starting control plane node force-systemd-env-593000 in cluster force-systemd-env-593000
	I0910 14:05:10.768884    3915 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0910 14:05:10.768919    3915 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0910 14:05:10.768931    3915 cache.go:57] Caching tarball of preloaded images
	I0910 14:05:10.769018    3915 preload.go:174] Found /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 14:05:10.769024    3915 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0910 14:05:10.769118    3915 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/force-systemd-env-593000/config.json ...
	I0910 14:05:10.769131    3915 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/force-systemd-env-593000/config.json: {Name:mkd59d4c190d9469103f8047a67c5c19964fdd92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 14:05:10.769337    3915 start.go:365] acquiring machines lock for force-systemd-env-593000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:05:10.769367    3915 start.go:369] acquired machines lock for "force-systemd-env-593000" in 23.042µs
	I0910 14:05:10.769377    3915 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-593000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kube
rnetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-env-593000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUI
D:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:05:10.769408    3915 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:05:10.777901    3915 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0910 14:05:10.792437    3915 start.go:159] libmachine.API.Create for "force-systemd-env-593000" (driver="qemu2")
	I0910 14:05:10.792463    3915 client.go:168] LocalClient.Create starting
	I0910 14:05:10.792519    3915 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:05:10.792554    3915 main.go:141] libmachine: Decoding PEM data...
	I0910 14:05:10.792566    3915 main.go:141] libmachine: Parsing certificate...
	I0910 14:05:10.792608    3915 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:05:10.792626    3915 main.go:141] libmachine: Decoding PEM data...
	I0910 14:05:10.792636    3915 main.go:141] libmachine: Parsing certificate...
	I0910 14:05:10.793011    3915 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:05:10.944449    3915 main.go:141] libmachine: Creating SSH key...
	I0910 14:05:11.030166    3915 main.go:141] libmachine: Creating Disk image...
	I0910 14:05:11.030176    3915 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:05:11.030326    3915 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/force-systemd-env-593000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/force-systemd-env-593000/disk.qcow2
	I0910 14:05:11.049900    3915 main.go:141] libmachine: STDOUT: 
	I0910 14:05:11.049914    3915 main.go:141] libmachine: STDERR: 
	I0910 14:05:11.049975    3915 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/force-systemd-env-593000/disk.qcow2 +20000M
	I0910 14:05:11.057919    3915 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:05:11.057930    3915 main.go:141] libmachine: STDERR: 
	I0910 14:05:11.057945    3915 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/force-systemd-env-593000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/force-systemd-env-593000/disk.qcow2
	I0910 14:05:11.057952    3915 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:05:11.057983    3915 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/force-systemd-env-593000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/force-systemd-env-593000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/force-systemd-env-593000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:c6:72:a4:17:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/force-systemd-env-593000/disk.qcow2
	I0910 14:05:11.059522    3915 main.go:141] libmachine: STDOUT: 
	I0910 14:05:11.059534    3915 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:05:11.059554    3915 client.go:171] LocalClient.Create took 267.087292ms
	I0910 14:05:13.061741    3915 start.go:128] duration metric: createHost completed in 2.292311208s
	I0910 14:05:13.061832    3915 start.go:83] releasing machines lock for "force-systemd-env-593000", held for 2.292460625s
	W0910 14:05:13.061949    3915 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:05:13.070322    3915 out.go:177] * Deleting "force-systemd-env-593000" in qemu2 ...
	W0910 14:05:13.091620    3915 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:05:13.091691    3915 start.go:687] Will try again in 5 seconds ...
	I0910 14:05:18.093903    3915 start.go:365] acquiring machines lock for force-systemd-env-593000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:05:18.094366    3915 start.go:369] acquired machines lock for "force-systemd-env-593000" in 359.167µs
	I0910 14:05:18.094496    3915 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-593000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kube
rnetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-env-593000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUI
D:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:05:18.094845    3915 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:05:18.103408    3915 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0910 14:05:18.150041    3915 start.go:159] libmachine.API.Create for "force-systemd-env-593000" (driver="qemu2")
	I0910 14:05:18.150093    3915 client.go:168] LocalClient.Create starting
	I0910 14:05:18.150227    3915 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:05:18.150285    3915 main.go:141] libmachine: Decoding PEM data...
	I0910 14:05:18.150304    3915 main.go:141] libmachine: Parsing certificate...
	I0910 14:05:18.150374    3915 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:05:18.150413    3915 main.go:141] libmachine: Decoding PEM data...
	I0910 14:05:18.150431    3915 main.go:141] libmachine: Parsing certificate...
	I0910 14:05:18.150905    3915 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:05:18.279646    3915 main.go:141] libmachine: Creating SSH key...
	I0910 14:05:18.346533    3915 main.go:141] libmachine: Creating Disk image...
	I0910 14:05:18.346538    3915 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:05:18.346676    3915 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/force-systemd-env-593000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/force-systemd-env-593000/disk.qcow2
	I0910 14:05:18.355153    3915 main.go:141] libmachine: STDOUT: 
	I0910 14:05:18.355169    3915 main.go:141] libmachine: STDERR: 
	I0910 14:05:18.355222    3915 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/force-systemd-env-593000/disk.qcow2 +20000M
	I0910 14:05:18.362358    3915 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:05:18.362369    3915 main.go:141] libmachine: STDERR: 
	I0910 14:05:18.362381    3915 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/force-systemd-env-593000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/force-systemd-env-593000/disk.qcow2
	I0910 14:05:18.362388    3915 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:05:18.362431    3915 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/force-systemd-env-593000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/force-systemd-env-593000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/force-systemd-env-593000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:2f:68:7d:96:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/force-systemd-env-593000/disk.qcow2
	I0910 14:05:18.363918    3915 main.go:141] libmachine: STDOUT: 
	I0910 14:05:18.363932    3915 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:05:18.363943    3915 client.go:171] LocalClient.Create took 213.841917ms
	I0910 14:05:20.366098    3915 start.go:128] duration metric: createHost completed in 2.271235833s
	I0910 14:05:20.366163    3915 start.go:83] releasing machines lock for "force-systemd-env-593000", held for 2.27177725s
	W0910 14:05:20.366533    3915 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-593000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-593000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:05:20.375259    3915 out.go:177] 
	W0910 14:05:20.380247    3915 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 14:05:20.380273    3915 out.go:239] * 
	* 
	W0910 14:05:20.382654    3915 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 14:05:20.391201    3915 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-593000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-593000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-593000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (75.248125ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-env-593000"

                                                
                                                
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-593000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2023-09-10 14:05:20.48485 -0700 PDT m=+793.168989793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-593000 -n force-systemd-env-593000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-593000 -n force-systemd-env-593000: exit status 7 (33.394666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-593000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-593000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-593000
--- FAIL: TestForceSystemdEnv (9.97s)
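The root cause above is `Failed to connect to "/var/run/socket_vmnet": Connection refused`: the socket file exists (or is missing) but no socket_vmnet daemon is accepting on it. The two failure modes can be reproduced in isolation with a small probe; this is an illustrative sketch (the helper name `probe_unix_socket` is hypothetical, not minikube code):

```python
import socket

def probe_unix_socket(path: str) -> str:
    """Classify a unix-domain socket path: "ok" if a listener accepts,
    "missing" if the socket file does not exist, "refused" if the file
    exists but nothing is listening (the socket_vmnet failure mode)."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        return "ok"
    except FileNotFoundError:
        return "missing"
    except ConnectionRefusedError:
        return "refused"
    finally:
        s.close()
```

A "refused" result here corresponds to the log's error: the path is present but the daemon that should be serving it is not running.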

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (28.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-765000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-765000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-msvtv" [b00805a2-299c-484e-b7c6-ee53ad911ca3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-msvtv" [b00805a2-299c-484e-b7c6-ee53ad911ca3] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.008754958s
functional_test.go:1648: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.105.4:32470
functional_test.go:1660: error fetching http://192.168.105.4:32470: Get "http://192.168.105.4:32470": dial tcp 192.168.105.4:32470: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32470: Get "http://192.168.105.4:32470": dial tcp 192.168.105.4:32470: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32470: Get "http://192.168.105.4:32470": dial tcp 192.168.105.4:32470: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32470: Get "http://192.168.105.4:32470": dial tcp 192.168.105.4:32470: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32470: Get "http://192.168.105.4:32470": dial tcp 192.168.105.4:32470: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32470: Get "http://192.168.105.4:32470": dial tcp 192.168.105.4:32470: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32470: Get "http://192.168.105.4:32470": dial tcp 192.168.105.4:32470: connect: connection refused
functional_test.go:1680: failed to fetch http://192.168.105.4:32470: Get "http://192.168.105.4:32470": dial tcp 192.168.105.4:32470: connect: connection refused
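The seven identical refusals above come from the harness retrying the NodePort URL before giving up. That retry-then-raise pattern can be sketched as follows (an illustrative helper, assuming `fetch_with_retry` as a hypothetical name; the real test lives in Go in functional_test.go):

```python
import time
import urllib.request
import urllib.error

def fetch_with_retry(url: str, attempts: int = 7, delay: float = 0.1) -> bytes:
    """GET url, retrying on connection errors; returns the body on
    success or re-raises the last error after `attempts` failures."""
    last = None
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.read()
        except (urllib.error.URLError, ConnectionError) as exc:
            last = exc
            time.sleep(delay)
    raise last
```

Here every attempt fails the same way because the service has no ready endpoints (the pod is crash-looping), so no amount of retrying at the NodePort can succeed.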
functional_test.go:1597: service test failed - dumping debug information
functional_test.go:1598: -----------------------service failure post-mortem--------------------------------
functional_test.go:1601: (dbg) Run:  kubectl --context functional-765000 describe po hello-node-connect
functional_test.go:1605: hello-node pod describe:
Name:             hello-node-connect-7799dfb7c6-msvtv
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-765000/192.168.105.4
Start Time:       Sun, 10 Sep 2023 13:56:33 -0700
Labels:           app=hello-node-connect
pod-template-hash=7799dfb7c6
Annotations:      <none>
Status:           Running
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-7799dfb7c6
Containers:
echoserver-arm:
Container ID:   docker://35f965bbc3eac4288bb0174e8c8de9199a0dec7cfdc3fb24cc885a381b26710f
Image:          registry.k8s.io/echoserver-arm:1.8
Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
Port:           <none>
Host Port:      <none>
State:          Terminated
Reason:       Error
Exit Code:    1
Started:      Sun, 10 Sep 2023 13:56:49 -0700
Finished:     Sun, 10 Sep 2023 13:56:49 -0700
Last State:     Terminated
Reason:       Error
Exit Code:    1
Started:      Sun, 10 Sep 2023 13:56:34 -0700
Finished:     Sun, 10 Sep 2023 13:56:34 -0700
Ready:          False
Restart Count:  2
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tkjmt (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
kube-api-access-tkjmt:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  27s                default-scheduler  Successfully assigned default/hello-node-connect-7799dfb7c6-msvtv to functional-765000
Normal   Pulled     11s (x3 over 26s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Normal   Created    11s (x3 over 26s)  kubelet            Created container echoserver-arm
Normal   Started    11s (x3 over 26s)  kubelet            Started container echoserver-arm
Warning  BackOff    11s (x2 over 25s)  kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-7799dfb7c6-msvtv_default(b00805a2-299c-484e-b7c6-ee53ad911ca3)

                                                
                                                
functional_test.go:1607: (dbg) Run:  kubectl --context functional-765000 logs -l app=hello-node-connect
functional_test.go:1611: hello-node logs:
exec /usr/sbin/nginx: exec format error
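The single log line above is the actual failure: `exec format error` is the kernel refusing to run a binary built for a different CPU architecture (here, an image containing an amd64 binary scheduled onto an arm64 node). The target architecture of an ELF binary is recorded in the header's `e_machine` field; a minimal reader, for illustration only:

```python
import struct

# A few e_machine codes from the ELF specification.
EM_NAMES = {40: "arm", 62: "x86_64", 183: "aarch64"}

def elf_machine(data: bytes) -> str:
    """Return the target architecture name of an ELF image."""
    if data[:4] != b"\x7fELF":
        raise ValueError("not an ELF binary")
    # Byte 5 of e_ident gives data encoding: 1 = little-endian, 2 = big-endian.
    endian = "<" if data[5] == 1 else ">"
    # e_machine is a 16-bit field at offset 18, after e_ident and e_type.
    (machine,) = struct.unpack_from(endian + "H", data, 18)
    return EM_NAMES.get(machine, "unknown(%d)" % machine)
```

On this arm64 host, an nginx binary reporting `x86_64` rather than `aarch64` would produce exactly this crash loop.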
functional_test.go:1613: (dbg) Run:  kubectl --context functional-765000 describe svc hello-node-connect
functional_test.go:1617: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.104.176.146
IPs:                      10.104.176.146
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32470/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-765000 -n functional-765000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| addons  | functional-765000 addons list                                                                                        | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:56 PDT | 10 Sep 23 13:56 PDT |
	|         | -o json                                                                                                              |                   |         |         |                     |                     |
	| service | functional-765000 service                                                                                            | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:56 PDT | 10 Sep 23 13:56 PDT |
	|         | hello-node-connect --url                                                                                             |                   |         |         |                     |                     |
	| ssh     | functional-765000 ssh findmnt                                                                                        | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:56 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-765000                                                                                                 | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:56 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3013722247/001:/mount-9p      |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-765000 ssh findmnt                                                                                        | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:56 PDT | 10 Sep 23 13:56 PDT |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-765000 ssh -- ls                                                                                          | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:56 PDT | 10 Sep 23 13:56 PDT |
	|         | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh     | functional-765000 ssh cat                                                                                            | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:56 PDT | 10 Sep 23 13:56 PDT |
	|         | /mount-9p/test-1694379411616744000                                                                                   |                   |         |         |                     |                     |
	| ssh     | functional-765000 ssh stat                                                                                           | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:56 PDT | 10 Sep 23 13:56 PDT |
	|         | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh     | functional-765000 ssh stat                                                                                           | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:56 PDT | 10 Sep 23 13:56 PDT |
	|         | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh     | functional-765000 ssh sudo                                                                                           | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:56 PDT | 10 Sep 23 13:56 PDT |
	|         | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-765000 ssh findmnt                                                                                        | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:56 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-765000                                                                                                 | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:56 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1765699636/001:/mount-9p |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-765000 ssh findmnt                                                                                        | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:56 PDT | 10 Sep 23 13:56 PDT |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-765000 ssh -- ls                                                                                          | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:56 PDT | 10 Sep 23 13:56 PDT |
	|         | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh     | functional-765000 ssh sudo                                                                                           | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:56 PDT |                     |
	|         | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount   | -p functional-765000                                                                                                 | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:56 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup253137323/001:/mount1    |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-765000 ssh findmnt                                                                                        | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:56 PDT |                     |
	|         | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| mount   | -p functional-765000                                                                                                 | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:56 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup253137323/001:/mount3    |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-765000                                                                                                 | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:56 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup253137323/001:/mount2    |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-765000 ssh findmnt                                                                                        | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:56 PDT | 10 Sep 23 13:56 PDT |
	|         | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-765000 ssh findmnt                                                                                        | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:56 PDT | 10 Sep 23 13:56 PDT |
	|         | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-765000 ssh findmnt                                                                                        | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:56 PDT |                     |
	|         | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-765000 ssh findmnt                                                                                        | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:56 PDT | 10 Sep 23 13:56 PDT |
	|         | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-765000 ssh findmnt                                                                                        | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:56 PDT | 10 Sep 23 13:56 PDT |
	|         | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-765000 ssh findmnt                                                                                        | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:56 PDT |                     |
	|         | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/10 13:55:30
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 13:55:30.261656    2534 out.go:296] Setting OutFile to fd 1 ...
	I0910 13:55:30.261766    2534 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 13:55:30.261767    2534 out.go:309] Setting ErrFile to fd 2...
	I0910 13:55:30.261769    2534 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 13:55:30.261877    2534 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 13:55:30.262895    2534 out.go:303] Setting JSON to false
	I0910 13:55:30.278093    2534 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1505,"bootTime":1694377825,"procs":393,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0910 13:55:30.278144    2534 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0910 13:55:30.282161    2534 out.go:177] * [functional-765000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0910 13:55:30.290199    2534 out.go:177]   - MINIKUBE_LOCATION=17207
	I0910 13:55:30.294212    2534 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 13:55:30.290304    2534 notify.go:220] Checking for updates...
	I0910 13:55:30.300173    2534 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 13:55:30.303249    2534 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 13:55:30.306154    2534 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	I0910 13:55:30.309186    2534 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 13:55:30.312443    2534 config.go:182] Loaded profile config "functional-765000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 13:55:30.312496    2534 driver.go:373] Setting default libvirt URI to qemu:///system
	I0910 13:55:30.316130    2534 out.go:177] * Using the qemu2 driver based on existing profile
	I0910 13:55:30.323162    2534 start.go:298] selected driver: qemu2
	I0910 13:55:30.323165    2534 start.go:902] validating driver "qemu2" against &{Name:functional-765000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:functional-765000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 13:55:30.323207    2534 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 13:55:30.325120    2534 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 13:55:30.325142    2534 cni.go:84] Creating CNI manager for ""
	I0910 13:55:30.325146    2534 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 13:55:30.325151    2534 start_flags.go:321] config:
	{Name:functional-765000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-765000 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 13:55:30.328831    2534 iso.go:125] acquiring lock: {Name:mk7b64d48bce9081530bc49bdfed6972c82152dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 13:55:30.337128    2534 out.go:177] * Starting control plane node functional-765000 in cluster functional-765000
	I0910 13:55:30.341219    2534 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0910 13:55:30.341231    2534 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0910 13:55:30.341242    2534 cache.go:57] Caching tarball of preloaded images
	I0910 13:55:30.341290    2534 preload.go:174] Found /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 13:55:30.341293    2534 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0910 13:55:30.341345    2534 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/functional-765000/config.json ...
	I0910 13:55:30.341646    2534 start.go:365] acquiring machines lock for functional-765000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 13:55:30.341671    2534 start.go:369] acquired machines lock for "functional-765000" in 21.75µs
	I0910 13:55:30.341678    2534 start.go:96] Skipping create...Using existing machine configuration
	I0910 13:55:30.341682    2534 fix.go:54] fixHost starting: 
	I0910 13:55:30.342243    2534 fix.go:102] recreateIfNeeded on functional-765000: state=Running err=<nil>
	W0910 13:55:30.342249    2534 fix.go:128] unexpected machine state, will restart: <nil>
	I0910 13:55:30.346158    2534 out.go:177] * Updating the running qemu2 "functional-765000" VM ...
	I0910 13:55:30.354201    2534 machine.go:88] provisioning docker machine ...
	I0910 13:55:30.354213    2534 buildroot.go:166] provisioning hostname "functional-765000"
	I0910 13:55:30.354239    2534 main.go:141] libmachine: Using SSH client type: native
	I0910 13:55:30.354476    2534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005f63b0] 0x1005f8e10 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0910 13:55:30.354480    2534 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-765000 && echo "functional-765000" | sudo tee /etc/hostname
	I0910 13:55:30.417445    2534 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-765000
	
	I0910 13:55:30.417490    2534 main.go:141] libmachine: Using SSH client type: native
	I0910 13:55:30.417740    2534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005f63b0] 0x1005f8e10 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0910 13:55:30.417747    2534 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-765000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-765000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-765000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 13:55:30.476878    2534 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 13:55:30.476885    2534 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17207-1093/.minikube CaCertPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17207-1093/.minikube}
	I0910 13:55:30.476891    2534 buildroot.go:174] setting up certificates
	I0910 13:55:30.476897    2534 provision.go:83] configureAuth start
	I0910 13:55:30.476900    2534 provision.go:138] copyHostCerts
	I0910 13:55:30.476957    2534 exec_runner.go:144] found /Users/jenkins/minikube-integration/17207-1093/.minikube/ca.pem, removing ...
	I0910 13:55:30.476960    2534 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17207-1093/.minikube/ca.pem
	I0910 13:55:30.477052    2534 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17207-1093/.minikube/ca.pem (1078 bytes)
	I0910 13:55:30.477202    2534 exec_runner.go:144] found /Users/jenkins/minikube-integration/17207-1093/.minikube/cert.pem, removing ...
	I0910 13:55:30.477203    2534 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17207-1093/.minikube/cert.pem
	I0910 13:55:30.477243    2534 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17207-1093/.minikube/cert.pem (1123 bytes)
	I0910 13:55:30.477339    2534 exec_runner.go:144] found /Users/jenkins/minikube-integration/17207-1093/.minikube/key.pem, removing ...
	I0910 13:55:30.477341    2534 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17207-1093/.minikube/key.pem
	I0910 13:55:30.477380    2534 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17207-1093/.minikube/key.pem (1675 bytes)
	I0910 13:55:30.477445    2534 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca-key.pem org=jenkins.functional-765000 san=[192.168.105.4 192.168.105.4 localhost 127.0.0.1 minikube functional-765000]
	I0910 13:55:30.554797    2534 provision.go:172] copyRemoteCerts
	I0910 13:55:30.554828    2534 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 13:55:30.554833    2534 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/functional-765000/id_rsa Username:docker}
	I0910 13:55:30.585380    2534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0910 13:55:30.593041    2534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0910 13:55:30.600482    2534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0910 13:55:30.607108    2534 provision.go:86] duration metric: configureAuth took 130.206167ms
	I0910 13:55:30.607113    2534 buildroot.go:189] setting minikube options for container-runtime
	I0910 13:55:30.607215    2534 config.go:182] Loaded profile config "functional-765000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 13:55:30.607254    2534 main.go:141] libmachine: Using SSH client type: native
	I0910 13:55:30.607476    2534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005f63b0] 0x1005f8e10 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0910 13:55:30.607479    2534 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0910 13:55:30.665003    2534 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0910 13:55:30.665013    2534 buildroot.go:70] root file system type: tmpfs
	I0910 13:55:30.665071    2534 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0910 13:55:30.665146    2534 main.go:141] libmachine: Using SSH client type: native
	I0910 13:55:30.665394    2534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005f63b0] 0x1005f8e10 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0910 13:55:30.665428    2534 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0910 13:55:30.726134    2534 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0910 13:55:30.726188    2534 main.go:141] libmachine: Using SSH client type: native
	I0910 13:55:30.726434    2534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005f63b0] 0x1005f8e10 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0910 13:55:30.726441    2534 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0910 13:55:30.785160    2534 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 13:55:30.785166    2534 machine.go:91] provisioned docker machine in 430.966292ms
	I0910 13:55:30.785169    2534 start.go:300] post-start starting for "functional-765000" (driver="qemu2")
	I0910 13:55:30.785173    2534 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 13:55:30.785221    2534 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 13:55:30.785227    2534 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/functional-765000/id_rsa Username:docker}
	I0910 13:55:30.816292    2534 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 13:55:30.817806    2534 info.go:137] Remote host: Buildroot 2021.02.12
	I0910 13:55:30.817813    2534 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17207-1093/.minikube/addons for local assets ...
	I0910 13:55:30.817879    2534 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17207-1093/.minikube/files for local assets ...
	I0910 13:55:30.817981    2534 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17207-1093/.minikube/files/etc/ssl/certs/22002.pem -> 22002.pem in /etc/ssl/certs
	I0910 13:55:30.818081    2534 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17207-1093/.minikube/files/etc/test/nested/copy/2200/hosts -> hosts in /etc/test/nested/copy/2200
	I0910 13:55:30.818110    2534 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/2200
	I0910 13:55:30.821544    2534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/files/etc/ssl/certs/22002.pem --> /etc/ssl/certs/22002.pem (1708 bytes)
	I0910 13:55:30.828711    2534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/files/etc/test/nested/copy/2200/hosts --> /etc/test/nested/copy/2200/hosts (40 bytes)
	I0910 13:55:30.835917    2534 start.go:303] post-start completed in 50.744042ms
	I0910 13:55:30.835921    2534 fix.go:56] fixHost completed within 494.246625ms
	I0910 13:55:30.835959    2534 main.go:141] libmachine: Using SSH client type: native
	I0910 13:55:30.836191    2534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005f63b0] 0x1005f8e10 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0910 13:55:30.836194    2534 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0910 13:55:30.892693    2534 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694379330.991977371
	
	I0910 13:55:30.892697    2534 fix.go:206] guest clock: 1694379330.991977371
	I0910 13:55:30.892700    2534 fix.go:219] Guest: 2023-09-10 13:55:30.991977371 -0700 PDT Remote: 2023-09-10 13:55:30.835922 -0700 PDT m=+0.593634543 (delta=156.055371ms)
	I0910 13:55:30.892709    2534 fix.go:190] guest clock delta is within tolerance: 156.055371ms
	I0910 13:55:30.892710    2534 start.go:83] releasing machines lock for "functional-765000", held for 551.042625ms
	I0910 13:55:30.893008    2534 ssh_runner.go:195] Run: cat /version.json
	I0910 13:55:30.893014    2534 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/functional-765000/id_rsa Username:docker}
	I0910 13:55:30.893023    2534 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 13:55:30.893037    2534 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/functional-765000/id_rsa Username:docker}
	I0910 13:55:30.923299    2534 ssh_runner.go:195] Run: systemctl --version
	I0910 13:55:30.925525    2534 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 13:55:30.964631    2534 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 13:55:30.964667    2534 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 13:55:30.967246    2534 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0910 13:55:30.967250    2534 start.go:466] detecting cgroup driver to use...
	I0910 13:55:30.967313    2534 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 13:55:30.972922    2534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0910 13:55:30.976673    2534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0910 13:55:30.979865    2534 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0910 13:55:30.979887    2534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0910 13:55:30.982984    2534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 13:55:30.986023    2534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0910 13:55:30.988925    2534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 13:55:30.992617    2534 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 13:55:30.995613    2534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
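The sequence of sed rewrites above can be reproduced against a scratch copy of a containerd config. A minimal sketch of the SystemdCgroup rewrite, assuming GNU sed and using an illustrative temp path and config snippet (the VM's real file is /etc/containerd/config.toml):

```shell
# Minimal reproduction of minikube's SystemdCgroup rewrite on a scratch file.
cat > /tmp/config.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# Same sed expression the log shows: flip SystemdCgroup to false, preserving indent.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /tmp/config.toml
grep SystemdCgroup /tmp/config.toml
```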
	I0910 13:55:30.998450    2534 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 13:55:31.001744    2534 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 13:55:31.005083    2534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 13:55:31.084850    2534 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0910 13:55:31.091340    2534 start.go:466] detecting cgroup driver to use...
	I0910 13:55:31.091390    2534 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0910 13:55:31.097972    2534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 13:55:31.103407    2534 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 13:55:31.110974    2534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 13:55:31.115580    2534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 13:55:31.119857    2534 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
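The crictl.yaml write above (where %!s(MISSING) in the raw log is a Go format-verb artifact) reduces to a printf-piped-to-tee pattern. A sketch writing to a temp path instead of /etc/crictl.yaml:

```shell
# Write the CRI runtime endpoint the way minikube does, to a temp path for illustration.
printf '%s\n' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock' | tee /tmp/crictl.yaml
cat /tmp/crictl.yaml
```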
	I0910 13:55:31.125776    2534 ssh_runner.go:195] Run: which cri-dockerd
	I0910 13:55:31.127232    2534 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0910 13:55:31.129895    2534 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0910 13:55:31.135344    2534 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0910 13:55:31.218610    2534 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0910 13:55:31.303470    2534 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0910 13:55:31.303478    2534 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0910 13:55:31.308484    2534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 13:55:31.382240    2534 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 13:55:42.690047    2534 ssh_runner.go:235] Completed: sudo systemctl restart docker: (11.307916959s)
	I0910 13:55:42.690109    2534 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0910 13:55:42.758950    2534 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0910 13:55:42.819152    2534 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0910 13:55:42.886904    2534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 13:55:42.951187    2534 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0910 13:55:42.959245    2534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 13:55:43.036390    2534 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0910 13:55:43.061960    2534 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0910 13:55:43.062049    2534 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0910 13:55:43.064326    2534 start.go:534] Will wait 60s for crictl version
	I0910 13:55:43.064368    2534 ssh_runner.go:195] Run: which crictl
	I0910 13:55:43.065872    2534 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 13:55:43.078141    2534 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1alpha2
	I0910 13:55:43.078215    2534 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 13:55:43.093952    2534 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 13:55:43.109022    2534 out.go:204] * Preparing Kubernetes v1.28.1 on Docker 24.0.6 ...
	I0910 13:55:43.109155    2534 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0910 13:55:43.112952    2534 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0910 13:55:43.117059    2534 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0910 13:55:43.117115    2534 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0910 13:55:43.122930    2534 docker.go:636] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-765000
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0910 13:55:43.122938    2534 docker.go:566] Images already preloaded, skipping extraction
	I0910 13:55:43.122982    2534 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0910 13:55:43.131882    2534 docker.go:636] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-765000
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0910 13:55:43.131888    2534 cache_images.go:84] Images are preloaded, skipping loading
	I0910 13:55:43.131941    2534 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0910 13:55:43.139166    2534 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0910 13:55:43.139182    2534 cni.go:84] Creating CNI manager for ""
	I0910 13:55:43.139187    2534 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 13:55:43.139190    2534 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0910 13:55:43.139198    2534 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.4 APIServerPort:8441 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-765000 NodeName:functional-765000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOp
ts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 13:55:43.139257    2534 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.4
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-765000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 13:55:43.139284    2534 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=functional-765000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:functional-765000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I0910 13:55:43.139338    2534 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0910 13:55:43.142395    2534 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 13:55:43.142418    2534 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 13:55:43.145362    2534 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0910 13:55:43.150367    2534 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 13:55:43.155361    2534 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1953 bytes)
	I0910 13:55:43.160045    2534 ssh_runner.go:195] Run: grep 192.168.105.4	control-plane.minikube.internal$ /etc/hosts
	I0910 13:55:43.161303    2534 certs.go:56] Setting up /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/functional-765000 for IP: 192.168.105.4
	I0910 13:55:43.161309    2534 certs.go:190] acquiring lock for shared ca certs: {Name:mk28134b321cd562735798fd2fcb10a58019fa5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 13:55:43.161446    2534 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17207-1093/.minikube/ca.key
	I0910 13:55:43.161482    2534 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17207-1093/.minikube/proxy-client-ca.key
	I0910 13:55:43.161531    2534 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/functional-765000/client.key
	I0910 13:55:43.161576    2534 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/functional-765000/apiserver.key.942c473b
	I0910 13:55:43.161613    2534 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/functional-765000/proxy-client.key
	I0910 13:55:43.161762    2534 certs.go:437] found cert: /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/2200.pem (1338 bytes)
	W0910 13:55:43.161785    2534 certs.go:433] ignoring /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/2200_empty.pem, impossibly tiny 0 bytes
	I0910 13:55:43.161791    2534 certs.go:437] found cert: /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca-key.pem (1675 bytes)
	I0910 13:55:43.161809    2534 certs.go:437] found cert: /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem (1078 bytes)
	I0910 13:55:43.161827    2534 certs.go:437] found cert: /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem (1123 bytes)
	I0910 13:55:43.161850    2534 certs.go:437] found cert: /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/key.pem (1675 bytes)
	I0910 13:55:43.161886    2534 certs.go:437] found cert: /Users/jenkins/minikube-integration/17207-1093/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17207-1093/.minikube/files/etc/ssl/certs/22002.pem (1708 bytes)
	I0910 13:55:43.162214    2534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/functional-765000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0910 13:55:43.169636    2534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/functional-765000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0910 13:55:43.176407    2534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/functional-765000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 13:55:43.183033    2534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/functional-765000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 13:55:43.189829    2534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 13:55:43.197202    2534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 13:55:43.204152    2534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 13:55:43.210837    2534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 13:55:43.217902    2534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 13:55:43.225008    2534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/2200.pem --> /usr/share/ca-certificates/2200.pem (1338 bytes)
	I0910 13:55:43.231819    2534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/files/etc/ssl/certs/22002.pem --> /usr/share/ca-certificates/22002.pem (1708 bytes)
	I0910 13:55:43.238412    2534 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 13:55:43.243323    2534 ssh_runner.go:195] Run: openssl version
	I0910 13:55:43.245076    2534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 13:55:43.248090    2534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 13:55:43.249467    2534 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 10 20:53 /usr/share/ca-certificates/minikubeCA.pem
	I0910 13:55:43.249488    2534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 13:55:43.251310    2534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 13:55:43.254090    2534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2200.pem && ln -fs /usr/share/ca-certificates/2200.pem /etc/ssl/certs/2200.pem"
	I0910 13:55:43.257516    2534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2200.pem
	I0910 13:55:43.258946    2534 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 10 20:54 /usr/share/ca-certificates/2200.pem
	I0910 13:55:43.258963    2534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2200.pem
	I0910 13:55:43.260799    2534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2200.pem /etc/ssl/certs/51391683.0"
	I0910 13:55:43.263487    2534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22002.pem && ln -fs /usr/share/ca-certificates/22002.pem /etc/ssl/certs/22002.pem"
	I0910 13:55:43.266537    2534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22002.pem
	I0910 13:55:43.267984    2534 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 10 20:54 /usr/share/ca-certificates/22002.pem
	I0910 13:55:43.268005    2534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22002.pem
	I0910 13:55:43.269746    2534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22002.pem /etc/ssl/certs/3ec20f2e.0"
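The hash-named symlinks above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention for CA directories. A self-contained sketch with a throwaway certificate (names and paths are illustrative, not minikube's):

```shell
# Generate a throwaway cert, compute its subject hash, and create the
# <hash>.0 symlink that OpenSSL CA lookup directories expect.
openssl req -x509 -newkey rsa:2048 -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt \
  -days 1 -nodes -subj '/CN=demoCA' 2>/dev/null
hash=$(openssl x509 -hash -noout -in /tmp/demo-ca.crt)
ln -sf /tmp/demo-ca.crt "/tmp/${hash}.0"
ls -l "/tmp/${hash}.0"
```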
	I0910 13:55:43.273248    2534 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0910 13:55:43.274692    2534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0910 13:55:43.276504    2534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0910 13:55:43.278281    2534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0910 13:55:43.280030    2534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0910 13:55:43.281839    2534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0910 13:55:43.283619    2534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
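The -checkend 86400 probes above succeed (exit 0) only if the certificate stays valid for at least another 24 hours. A sketch with a throwaway certificate rather than the cluster's real certs:

```shell
# -checkend N exits 0 if the cert does not expire within N seconds.
openssl req -x509 -newkey rsa:2048 -keyout /tmp/demo.key -out /tmp/demo.crt \
  -days 2 -nodes -subj '/CN=demo' 2>/dev/null
if openssl x509 -noout -in /tmp/demo.crt -checkend 86400; then
  echo 'still valid in 24h'
fi
```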
	I0910 13:55:43.285447    2534 kubeadm.go:404] StartCluster: {Name:functional-765000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.28.1 ClusterName:functional-765000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExp
iration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 13:55:43.285515    2534 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0910 13:55:43.293417    2534 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 13:55:43.296982    2534 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0910 13:55:43.296989    2534 kubeadm.go:636] restartCluster start
	I0910 13:55:43.297011    2534 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0910 13:55:43.299979    2534 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0910 13:55:43.300260    2534 kubeconfig.go:92] found "functional-765000" server: "https://192.168.105.4:8441"
	I0910 13:55:43.300965    2534 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0910 13:55:43.304011    2534 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
	I0910 13:55:43.304015    2534 kubeadm.go:1128] stopping kube-system containers ...
	I0910 13:55:43.304054    2534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0910 13:55:43.314441    2534 docker.go:462] Stopping containers: [f974a239353b 867b8c503c39 c3c4cff188ec ac2c7a43196d 896c07b9cbfd 8031748e1286 cf61f5a6a2a0 99faa6441be3 4fe99976d2cf 05673841f64e 08121c1f89e1 e78c6008ae37 93c5943ed397 151b4b540104 7cf93ae24d19 594dfb417350 729bb00a1398 93efcc2e5792 4e9d6008aefa bdde6e1273b3 f06674c18e97 ccb921e20ead fe8b9b9de0de 83a3a5ca0a8c 7a09d2078621 f0df727681f6 549a1fb1a085 54a0e4ce3a59]
	I0910 13:55:43.314603    2534 ssh_runner.go:195] Run: docker stop f974a239353b 867b8c503c39 c3c4cff188ec ac2c7a43196d 896c07b9cbfd 8031748e1286 cf61f5a6a2a0 99faa6441be3 4fe99976d2cf 05673841f64e 08121c1f89e1 e78c6008ae37 93c5943ed397 151b4b540104 7cf93ae24d19 594dfb417350 729bb00a1398 93efcc2e5792 4e9d6008aefa bdde6e1273b3 f06674c18e97 ccb921e20ead fe8b9b9de0de 83a3a5ca0a8c 7a09d2078621 f0df727681f6 549a1fb1a085 54a0e4ce3a59
	I0910 13:55:43.321967    2534 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0910 13:55:43.415158    2534 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 13:55:43.419621    2534 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Sep 10 20:54 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Sep 10 20:54 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Sep 10 20:54 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Sep 10 20:54 /etc/kubernetes/scheduler.conf
	
	I0910 13:55:43.419653    2534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0910 13:55:43.423444    2534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0910 13:55:43.426868    2534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0910 13:55:43.430029    2534 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0910 13:55:43.430052    2534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0910 13:55:43.433680    2534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0910 13:55:43.437211    2534 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0910 13:55:43.437231    2534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0910 13:55:43.440506    2534 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 13:55:43.443360    2534 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0910 13:55:43.443363    2534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 13:55:43.465322    2534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 13:55:44.033641    2534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0910 13:55:44.130737    2534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 13:55:44.153793    2534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0910 13:55:44.178121    2534 api_server.go:52] waiting for apiserver process to appear ...
	I0910 13:55:44.178168    2534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 13:55:44.182694    2534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 13:55:44.688510    2534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 13:55:45.188531    2534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 13:55:45.193296    2534 api_server.go:72] duration metric: took 1.015188s to wait for apiserver process to appear ...
	I0910 13:55:45.193302    2534 api_server.go:88] waiting for apiserver healthz status ...
	I0910 13:55:45.193310    2534 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0910 13:55:47.504421    2534 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0910 13:55:47.504428    2534 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0910 13:55:47.504433    2534 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0910 13:55:47.549116    2534 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0910 13:55:47.549127    2534 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0910 13:55:48.051193    2534 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0910 13:55:48.055003    2534 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0910 13:55:48.055012    2534 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0910 13:55:48.551197    2534 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0910 13:55:48.554666    2534 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0910 13:55:48.554673    2534 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0910 13:55:49.051153    2534 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0910 13:55:49.054516    2534 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0910 13:55:49.059509    2534 api_server.go:141] control plane version: v1.28.1
	I0910 13:55:49.059514    2534 api_server.go:131] duration metric: took 3.866251041s to wait for apiserver health ...
	I0910 13:55:49.059520    2534 cni.go:84] Creating CNI manager for ""
	I0910 13:55:49.059525    2534 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 13:55:49.062705    2534 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0910 13:55:49.066689    2534 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 13:55:49.069932    2534 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0910 13:55:49.075585    2534 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 13:55:49.080233    2534 system_pods.go:59] 7 kube-system pods found
	I0910 13:55:49.080242    2534 system_pods.go:61] "coredns-5dd5756b68-z5fsl" [66bb620b-3bad-4123-9a91-e96a0e0a7676] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0910 13:55:49.080245    2534 system_pods.go:61] "etcd-functional-765000" [2926ab38-bcea-4ab2-a0d0-5b4eb929b923] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0910 13:55:49.080249    2534 system_pods.go:61] "kube-apiserver-functional-765000" [6c573c1a-b722-4483-803a-37977adb7144] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0910 13:55:49.080251    2534 system_pods.go:61] "kube-controller-manager-functional-765000" [e1d880a8-127c-476a-898e-f5f1f42067bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0910 13:55:49.080254    2534 system_pods.go:61] "kube-proxy-gfpm9" [e1cb83da-6d9b-40ea-bae2-0c49f4e4fed5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0910 13:55:49.080257    2534 system_pods.go:61] "kube-scheduler-functional-765000" [e3bbbbdc-39ac-4cb4-b1b0-5238a19d884e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0910 13:55:49.080259    2534 system_pods.go:61] "storage-provisioner" [43600b09-a295-4a4f-8094-df7394d753b7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0910 13:55:49.080261    2534 system_pods.go:74] duration metric: took 4.672333ms to wait for pod list to return data ...
	I0910 13:55:49.080264    2534 node_conditions.go:102] verifying NodePressure condition ...
	I0910 13:55:49.081930    2534 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0910 13:55:49.081937    2534 node_conditions.go:123] node cpu capacity is 2
	I0910 13:55:49.081942    2534 node_conditions.go:105] duration metric: took 1.676209ms to run NodePressure ...
	I0910 13:55:49.081949    2534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0910 13:55:49.170470    2534 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0910 13:55:49.172732    2534 kubeadm.go:787] kubelet initialised
	I0910 13:55:49.172736    2534 kubeadm.go:788] duration metric: took 2.259542ms waiting for restarted kubelet to initialise ...
	I0910 13:55:49.172739    2534 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 13:55:49.175407    2534 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-z5fsl" in "kube-system" namespace to be "Ready" ...
	I0910 13:55:51.184709    2534 pod_ready.go:102] pod "coredns-5dd5756b68-z5fsl" in "kube-system" namespace has status "Ready":"False"
	I0910 13:55:53.185128    2534 pod_ready.go:102] pod "coredns-5dd5756b68-z5fsl" in "kube-system" namespace has status "Ready":"False"
	I0910 13:55:55.685484    2534 pod_ready.go:92] pod "coredns-5dd5756b68-z5fsl" in "kube-system" namespace has status "Ready":"True"
	I0910 13:55:55.685490    2534 pod_ready.go:81] duration metric: took 6.510147875s waiting for pod "coredns-5dd5756b68-z5fsl" in "kube-system" namespace to be "Ready" ...
	I0910 13:55:55.685494    2534 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-765000" in "kube-system" namespace to be "Ready" ...
	I0910 13:55:56.695205    2534 pod_ready.go:92] pod "etcd-functional-765000" in "kube-system" namespace has status "Ready":"True"
	I0910 13:55:56.695212    2534 pod_ready.go:81] duration metric: took 1.009725917s waiting for pod "etcd-functional-765000" in "kube-system" namespace to be "Ready" ...
	I0910 13:55:56.695217    2534 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-765000" in "kube-system" namespace to be "Ready" ...
	I0910 13:55:58.704591    2534 pod_ready.go:102] pod "kube-apiserver-functional-765000" in "kube-system" namespace has status "Ready":"False"
	I0910 13:56:00.705211    2534 pod_ready.go:102] pod "kube-apiserver-functional-765000" in "kube-system" namespace has status "Ready":"False"
	I0910 13:56:02.204589    2534 pod_ready.go:92] pod "kube-apiserver-functional-765000" in "kube-system" namespace has status "Ready":"True"
	I0910 13:56:02.204594    2534 pod_ready.go:81] duration metric: took 5.50943275s waiting for pod "kube-apiserver-functional-765000" in "kube-system" namespace to be "Ready" ...
	I0910 13:56:02.204598    2534 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-765000" in "kube-system" namespace to be "Ready" ...
	I0910 13:56:02.207189    2534 pod_ready.go:92] pod "kube-controller-manager-functional-765000" in "kube-system" namespace has status "Ready":"True"
	I0910 13:56:02.207191    2534 pod_ready.go:81] duration metric: took 2.5915ms waiting for pod "kube-controller-manager-functional-765000" in "kube-system" namespace to be "Ready" ...
	I0910 13:56:02.207195    2534 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gfpm9" in "kube-system" namespace to be "Ready" ...
	I0910 13:56:02.209275    2534 pod_ready.go:92] pod "kube-proxy-gfpm9" in "kube-system" namespace has status "Ready":"True"
	I0910 13:56:02.209277    2534 pod_ready.go:81] duration metric: took 2.080708ms waiting for pod "kube-proxy-gfpm9" in "kube-system" namespace to be "Ready" ...
	I0910 13:56:02.209280    2534 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-765000" in "kube-system" namespace to be "Ready" ...
	I0910 13:56:02.211331    2534 pod_ready.go:92] pod "kube-scheduler-functional-765000" in "kube-system" namespace has status "Ready":"True"
	I0910 13:56:02.211333    2534 pod_ready.go:81] duration metric: took 2.051416ms waiting for pod "kube-scheduler-functional-765000" in "kube-system" namespace to be "Ready" ...
	I0910 13:56:02.211340    2534 pod_ready.go:38] duration metric: took 13.038731667s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 13:56:02.211347    2534 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 13:56:02.215358    2534 ops.go:34] apiserver oom_adj: -16
	I0910 13:56:02.215363    2534 kubeadm.go:640] restartCluster took 18.918571667s
	I0910 13:56:02.215366    2534 kubeadm.go:406] StartCluster complete in 18.930121208s
	I0910 13:56:02.215373    2534 settings.go:142] acquiring lock: {Name:mk5069f344fe5f68592bc6867db9aede10bc3fe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 13:56:02.215452    2534 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 13:56:02.215778    2534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/kubeconfig: {Name:mk7c70008fc2d1b0ba569659f9157708891e79a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 13:56:02.215992    2534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0910 13:56:02.216027    2534 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0910 13:56:02.216057    2534 addons.go:69] Setting storage-provisioner=true in profile "functional-765000"
	I0910 13:56:02.216064    2534 addons.go:231] Setting addon storage-provisioner=true in "functional-765000"
	W0910 13:56:02.216066    2534 addons.go:240] addon storage-provisioner should already be in state true
	I0910 13:56:02.216093    2534 host.go:66] Checking if "functional-765000" exists ...
	I0910 13:56:02.216098    2534 config.go:182] Loaded profile config "functional-765000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 13:56:02.216103    2534 addons.go:69] Setting default-storageclass=true in profile "functional-765000"
	I0910 13:56:02.216130    2534 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-765000"
	W0910 13:56:02.216342    2534 host.go:54] host status for "functional-765000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/functional-765000/monitor: connect: connection refused
	W0910 13:56:02.216348    2534 addons.go:277] "functional-765000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	I0910 13:56:02.217754    2534 kapi.go:248] "coredns" deployment in "kube-system" namespace and "functional-765000" context rescaled to 1 replicas
	I0910 13:56:02.217762    2534 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 13:56:02.225000    2534 out.go:177] * Verifying Kubernetes components...
	I0910 13:56:02.221850    2534 addons.go:231] Setting addon default-storageclass=true in "functional-765000"
	W0910 13:56:02.225011    2534 addons.go:240] addon default-storageclass should already be in state true
	I0910 13:56:02.229126    2534 host.go:66] Checking if "functional-765000" exists ...
	I0910 13:56:02.229153    2534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 13:56:02.229909    2534 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 13:56:02.229913    2534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 13:56:02.229918    2534 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/functional-765000/id_rsa Username:docker}
	I0910 13:56:02.275641    2534 node_ready.go:35] waiting up to 6m0s for node "functional-765000" to be "Ready" ...
	I0910 13:56:02.275645    2534 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0910 13:56:02.281796    2534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 13:56:02.283266    2534 node_ready.go:49] node "functional-765000" has status "Ready":"True"
	I0910 13:56:02.283272    2534 node_ready.go:38] duration metric: took 7.621083ms waiting for node "functional-765000" to be "Ready" ...
	I0910 13:56:02.283275    2534 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 13:56:02.487471    2534 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-z5fsl" in "kube-system" namespace to be "Ready" ...
	I0910 13:56:02.513383    2534 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0910 13:56:02.516370    2534 addons.go:502] enable addons completed in 300.345583ms: enabled=[storage-provisioner default-storageclass]
	I0910 13:56:02.885286    2534 pod_ready.go:92] pod "coredns-5dd5756b68-z5fsl" in "kube-system" namespace has status "Ready":"True"
	I0910 13:56:02.885292    2534 pod_ready.go:81] duration metric: took 397.819292ms waiting for pod "coredns-5dd5756b68-z5fsl" in "kube-system" namespace to be "Ready" ...
	I0910 13:56:02.885297    2534 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-765000" in "kube-system" namespace to be "Ready" ...
	I0910 13:56:03.285420    2534 pod_ready.go:92] pod "etcd-functional-765000" in "kube-system" namespace has status "Ready":"True"
	I0910 13:56:03.285425    2534 pod_ready.go:81] duration metric: took 400.130792ms waiting for pod "etcd-functional-765000" in "kube-system" namespace to be "Ready" ...
	I0910 13:56:03.285430    2534 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-765000" in "kube-system" namespace to be "Ready" ...
	I0910 13:56:03.685064    2534 pod_ready.go:92] pod "kube-apiserver-functional-765000" in "kube-system" namespace has status "Ready":"True"
	I0910 13:56:03.685069    2534 pod_ready.go:81] duration metric: took 399.640917ms waiting for pod "kube-apiserver-functional-765000" in "kube-system" namespace to be "Ready" ...
	I0910 13:56:03.685074    2534 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-765000" in "kube-system" namespace to be "Ready" ...
	I0910 13:56:04.085393    2534 pod_ready.go:92] pod "kube-controller-manager-functional-765000" in "kube-system" namespace has status "Ready":"True"
	I0910 13:56:04.085399    2534 pod_ready.go:81] duration metric: took 400.326458ms waiting for pod "kube-controller-manager-functional-765000" in "kube-system" namespace to be "Ready" ...
	I0910 13:56:04.085404    2534 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gfpm9" in "kube-system" namespace to be "Ready" ...
	I0910 13:56:04.485025    2534 pod_ready.go:92] pod "kube-proxy-gfpm9" in "kube-system" namespace has status "Ready":"True"
	I0910 13:56:04.485029    2534 pod_ready.go:81] duration metric: took 399.627125ms waiting for pod "kube-proxy-gfpm9" in "kube-system" namespace to be "Ready" ...
	I0910 13:56:04.485033    2534 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-765000" in "kube-system" namespace to be "Ready" ...
	I0910 13:56:04.884872    2534 pod_ready.go:92] pod "kube-scheduler-functional-765000" in "kube-system" namespace has status "Ready":"True"
	I0910 13:56:04.884876    2534 pod_ready.go:81] duration metric: took 399.845084ms waiting for pod "kube-scheduler-functional-765000" in "kube-system" namespace to be "Ready" ...
	I0910 13:56:04.884879    2534 pod_ready.go:38] duration metric: took 2.601628417s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 13:56:04.884891    2534 api_server.go:52] waiting for apiserver process to appear ...
	I0910 13:56:04.884967    2534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 13:56:04.890317    2534 api_server.go:72] duration metric: took 2.672570917s to wait for apiserver process to appear ...
	I0910 13:56:04.890321    2534 api_server.go:88] waiting for apiserver healthz status ...
	I0910 13:56:04.890330    2534 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0910 13:56:04.893640    2534 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0910 13:56:04.894405    2534 api_server.go:141] control plane version: v1.28.1
	I0910 13:56:04.894408    2534 api_server.go:131] duration metric: took 4.085167ms to wait for apiserver health ...
	I0910 13:56:04.894410    2534 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 13:56:05.087507    2534 system_pods.go:59] 7 kube-system pods found
	I0910 13:56:05.087512    2534 system_pods.go:61] "coredns-5dd5756b68-z5fsl" [66bb620b-3bad-4123-9a91-e96a0e0a7676] Running
	I0910 13:56:05.087515    2534 system_pods.go:61] "etcd-functional-765000" [2926ab38-bcea-4ab2-a0d0-5b4eb929b923] Running
	I0910 13:56:05.087516    2534 system_pods.go:61] "kube-apiserver-functional-765000" [6c573c1a-b722-4483-803a-37977adb7144] Running
	I0910 13:56:05.087518    2534 system_pods.go:61] "kube-controller-manager-functional-765000" [e1d880a8-127c-476a-898e-f5f1f42067bc] Running
	I0910 13:56:05.087519    2534 system_pods.go:61] "kube-proxy-gfpm9" [e1cb83da-6d9b-40ea-bae2-0c49f4e4fed5] Running
	I0910 13:56:05.087521    2534 system_pods.go:61] "kube-scheduler-functional-765000" [e3bbbbdc-39ac-4cb4-b1b0-5238a19d884e] Running
	I0910 13:56:05.087522    2534 system_pods.go:61] "storage-provisioner" [43600b09-a295-4a4f-8094-df7394d753b7] Running
	I0910 13:56:05.087524    2534 system_pods.go:74] duration metric: took 193.115084ms to wait for pod list to return data ...
	I0910 13:56:05.087527    2534 default_sa.go:34] waiting for default service account to be created ...
	I0910 13:56:05.283769    2534 default_sa.go:45] found service account: "default"
	I0910 13:56:05.283776    2534 default_sa.go:55] duration metric: took 196.250125ms for default service account to be created ...
	I0910 13:56:05.283780    2534 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 13:56:05.487053    2534 system_pods.go:86] 7 kube-system pods found
	I0910 13:56:05.487059    2534 system_pods.go:89] "coredns-5dd5756b68-z5fsl" [66bb620b-3bad-4123-9a91-e96a0e0a7676] Running
	I0910 13:56:05.487061    2534 system_pods.go:89] "etcd-functional-765000" [2926ab38-bcea-4ab2-a0d0-5b4eb929b923] Running
	I0910 13:56:05.487063    2534 system_pods.go:89] "kube-apiserver-functional-765000" [6c573c1a-b722-4483-803a-37977adb7144] Running
	I0910 13:56:05.487066    2534 system_pods.go:89] "kube-controller-manager-functional-765000" [e1d880a8-127c-476a-898e-f5f1f42067bc] Running
	I0910 13:56:05.487068    2534 system_pods.go:89] "kube-proxy-gfpm9" [e1cb83da-6d9b-40ea-bae2-0c49f4e4fed5] Running
	I0910 13:56:05.487069    2534 system_pods.go:89] "kube-scheduler-functional-765000" [e3bbbbdc-39ac-4cb4-b1b0-5238a19d884e] Running
	I0910 13:56:05.487071    2534 system_pods.go:89] "storage-provisioner" [43600b09-a295-4a4f-8094-df7394d753b7] Running
	I0910 13:56:05.487073    2534 system_pods.go:126] duration metric: took 203.293125ms to wait for k8s-apps to be running ...
	I0910 13:56:05.487075    2534 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 13:56:05.487129    2534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 13:56:05.492341    2534 system_svc.go:56] duration metric: took 5.2635ms WaitForService to wait for kubelet.
	I0910 13:56:05.492348    2534 kubeadm.go:581] duration metric: took 3.274613042s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0910 13:56:05.492358    2534 node_conditions.go:102] verifying NodePressure condition ...
	I0910 13:56:05.685086    2534 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0910 13:56:05.685093    2534 node_conditions.go:123] node cpu capacity is 2
	I0910 13:56:05.685100    2534 node_conditions.go:105] duration metric: took 192.741833ms to run NodePressure ...
	I0910 13:56:05.685105    2534 start.go:228] waiting for startup goroutines ...
	I0910 13:56:05.685108    2534 start.go:233] waiting for cluster config update ...
	I0910 13:56:05.685112    2534 start.go:242] writing updated cluster config ...
	I0910 13:56:05.685502    2534 ssh_runner.go:195] Run: rm -f paused
	I0910 13:56:05.715082    2534 start.go:600] kubectl: 1.27.2, cluster: 1.28.1 (minor skew: 1)
	I0910 13:56:05.719327    2534 out.go:177] * Done! kubectl is now configured to use "functional-765000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Sun 2023-09-10 20:54:12 UTC, ends at Sun 2023-09-10 20:57:01 UTC. --
	Sep 10 20:56:52 functional-765000 dockerd[6638]: time="2023-09-10T20:56:52.876403990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 20:56:52 functional-765000 dockerd[6638]: time="2023-09-10T20:56:52.876646238Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 20:56:52 functional-765000 dockerd[6638]: time="2023-09-10T20:56:52.876659946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 20:56:52 functional-765000 cri-dockerd[6896]: time="2023-09-10T20:56:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/77913ddb191d2f9dc5accfb385192d87a4fc79221b9bc9390323b0941a9b6ee7/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 10 20:56:54 functional-765000 cri-dockerd[6896]: time="2023-09-10T20:56:54Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Sep 10 20:56:54 functional-765000 dockerd[6638]: time="2023-09-10T20:56:54.145962306Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 20:56:54 functional-765000 dockerd[6638]: time="2023-09-10T20:56:54.145992181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 20:56:54 functional-765000 dockerd[6638]: time="2023-09-10T20:56:54.146002806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 20:56:54 functional-765000 dockerd[6638]: time="2023-09-10T20:56:54.146009139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 20:56:54 functional-765000 dockerd[6638]: time="2023-09-10T20:56:54.193038490Z" level=info msg="shim disconnected" id=fd179d10411129915c4bea8e5bf2fdde7d25ea382e8e8a4e171c64f5b8ad2178 namespace=moby
	Sep 10 20:56:54 functional-765000 dockerd[6638]: time="2023-09-10T20:56:54.193067407Z" level=warning msg="cleaning up after shim disconnected" id=fd179d10411129915c4bea8e5bf2fdde7d25ea382e8e8a4e171c64f5b8ad2178 namespace=moby
	Sep 10 20:56:54 functional-765000 dockerd[6638]: time="2023-09-10T20:56:54.193071490Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 20:56:54 functional-765000 dockerd[6632]: time="2023-09-10T20:56:54.193192615Z" level=info msg="ignoring event" container=fd179d10411129915c4bea8e5bf2fdde7d25ea382e8e8a4e171c64f5b8ad2178 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 20:56:55 functional-765000 dockerd[6632]: time="2023-09-10T20:56:55.907930314Z" level=info msg="ignoring event" container=77913ddb191d2f9dc5accfb385192d87a4fc79221b9bc9390323b0941a9b6ee7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 20:56:55 functional-765000 dockerd[6638]: time="2023-09-10T20:56:55.907966689Z" level=info msg="shim disconnected" id=77913ddb191d2f9dc5accfb385192d87a4fc79221b9bc9390323b0941a9b6ee7 namespace=moby
	Sep 10 20:56:55 functional-765000 dockerd[6638]: time="2023-09-10T20:56:55.908023564Z" level=warning msg="cleaning up after shim disconnected" id=77913ddb191d2f9dc5accfb385192d87a4fc79221b9bc9390323b0941a9b6ee7 namespace=moby
	Sep 10 20:56:55 functional-765000 dockerd[6638]: time="2023-09-10T20:56:55.908028939Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 20:57:00 functional-765000 dockerd[6638]: time="2023-09-10T20:57:00.324132831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 20:57:00 functional-765000 dockerd[6638]: time="2023-09-10T20:57:00.324227623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 20:57:00 functional-765000 dockerd[6638]: time="2023-09-10T20:57:00.324238957Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 20:57:00 functional-765000 dockerd[6638]: time="2023-09-10T20:57:00.324245499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 20:57:00 functional-765000 dockerd[6632]: time="2023-09-10T20:57:00.357300743Z" level=info msg="ignoring event" container=ca01269826e416a538a46844a458464ad73531598ec30d11e99bfacbc2b6720b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 20:57:00 functional-765000 dockerd[6638]: time="2023-09-10T20:57:00.358227461Z" level=info msg="shim disconnected" id=ca01269826e416a538a46844a458464ad73531598ec30d11e99bfacbc2b6720b namespace=moby
	Sep 10 20:57:00 functional-765000 dockerd[6638]: time="2023-09-10T20:57:00.358264128Z" level=warning msg="cleaning up after shim disconnected" id=ca01269826e416a538a46844a458464ad73531598ec30d11e99bfacbc2b6720b namespace=moby
	Sep 10 20:57:00 functional-765000 dockerd[6638]: time="2023-09-10T20:57:00.358269461Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	ca01269826e41       72565bf5bbedf                                                                                         1 second ago         Exited              echoserver-arm            3                   78bd47d6c33df
	fd179d1041112       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   7 seconds ago        Exited              mount-munger              0                   77913ddb191d2
	35f965bbc3eac       72565bf5bbedf                                                                                         12 seconds ago       Exited              echoserver-arm            2                   74d39894a2b65
	5de88d1ba37fe       nginx@sha256:6926dd802f40e5e7257fded83e0d8030039642e4e10c4a98a6478e9c6fe06153                         16 seconds ago       Running             myfrontend                0                   a7f0325858556
	43c8a16984b2f       nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                         35 seconds ago       Running             nginx                     0                   17772d571fabc
	d4b976c3b335e       97e04611ad434                                                                                         About a minute ago   Running             coredns                   2                   22c8b9338422e
	2ca38be959a5b       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       2                   f4d0833a03cf3
	75af5a79dda0c       812f5241df7fd                                                                                         About a minute ago   Running             kube-proxy                2                   b02a717975d47
	dcf3749e90a08       9cdd6470f48c8                                                                                         About a minute ago   Running             etcd                      2                   729b02b17167c
	81f74733b317b       8b6e1980b7584                                                                                         About a minute ago   Running             kube-controller-manager   2                   3904d56789a84
	42f68a94180c9       b29fb62480892                                                                                         About a minute ago   Running             kube-apiserver            0                   6cfe7de340272
	23e68828c81f6       b4a5a57e99492                                                                                         About a minute ago   Running             kube-scheduler            2                   b676b27e9b19d
	f974a239353b6       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       1                   151b4b540104e
	867b8c503c390       97e04611ad434                                                                                         About a minute ago   Exited              coredns                   1                   99faa6441be3a
	c3c4cff188ec7       9cdd6470f48c8                                                                                         About a minute ago   Exited              etcd                      1                   05673841f64e9
	896c07b9cbfd8       b4a5a57e99492                                                                                         About a minute ago   Exited              kube-scheduler            1                   93c5943ed3970
	8031748e1286f       8b6e1980b7584                                                                                         About a minute ago   Exited              kube-controller-manager   1                   e78c6008ae370
	cf61f5a6a2a05       812f5241df7fd                                                                                         About a minute ago   Exited              kube-proxy                1                   4fe99976d2cfa
	
	* 
	* ==> coredns [867b8c503c39] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:56373 - 58747 "HINFO IN 7165991495095165684.4648474661474318152. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.005101616s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [d4b976c3b335] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36520 - 59893 "HINFO IN 339026048158379051.3530408511854837531. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.004352559s
	[INFO] 10.244.0.1:63479 - 30589 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000142323s
	[INFO] 10.244.0.1:4873 - 64795 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000220818s
	[INFO] 10.244.0.1:35924 - 9784 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000026873s
	[INFO] 10.244.0.1:14076 - 22901 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001239623s
	[INFO] 10.244.0.1:40296 - 4737 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000109867s
	[INFO] 10.244.0.1:1907 - 20248 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000019249s
	
	* 
	* ==> describe nodes <==
	* Name:               functional-765000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-765000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d731e1cec1979d094cdaebcdf1ed599ff8209767
	                    minikube.k8s.io/name=functional-765000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_10T13_54_30_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 10 Sep 2023 20:54:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-765000
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 10 Sep 2023 20:56:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 10 Sep 2023 20:56:49 +0000   Sun, 10 Sep 2023 20:54:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 10 Sep 2023 20:56:49 +0000   Sun, 10 Sep 2023 20:54:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 10 Sep 2023 20:56:49 +0000   Sun, 10 Sep 2023 20:54:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 10 Sep 2023 20:56:49 +0000   Sun, 10 Sep 2023 20:54:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-765000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 cde9db0c991248b6ad510de132eb6511
	  System UUID:                cde9db0c991248b6ad510de132eb6511
	  Boot ID:                    12234b18-4b70-4ae6-a29e-2dd0a85bc08b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-759d89bdcc-msrgv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  default                     hello-node-connect-7799dfb7c6-msvtv          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	  kube-system                 coredns-5dd5756b68-z5fsl                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m18s
	  kube-system                 etcd-functional-765000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m31s
	  kube-system                 kube-apiserver-functional-765000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-controller-manager-functional-765000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-proxy-gfpm9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-scheduler-functional-765000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m17s              kube-proxy       
	  Normal  Starting                 72s                kube-proxy       
	  Normal  Starting                 115s               kube-proxy       
	  Normal  Starting                 2m31s              kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m31s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m31s              kubelet          Node functional-765000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m31s              kubelet          Node functional-765000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m31s              kubelet          Node functional-765000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m28s              kubelet          Node functional-765000 status is now: NodeReady
	  Normal  RegisteredNode           2m19s              node-controller  Node functional-765000 event: Registered Node functional-765000 in Controller
	  Normal  RegisteredNode           103s               node-controller  Node functional-765000 event: Registered Node functional-765000 in Controller
	  Normal  Starting                 77s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  77s (x8 over 77s)  kubelet          Node functional-765000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    77s (x8 over 77s)  kubelet          Node functional-765000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     77s (x7 over 77s)  kubelet          Node functional-765000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  77s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           62s                node-controller  Node functional-765000 event: Registered Node functional-765000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +4.805282] systemd-fstab-generator[3745]: Ignoring "noauto" for root device
	[  +0.129179] systemd-fstab-generator[3778]: Ignoring "noauto" for root device
	[  +0.084022] systemd-fstab-generator[3789]: Ignoring "noauto" for root device
	[  +0.079601] systemd-fstab-generator[3812]: Ignoring "noauto" for root device
	[  +5.021041] kauditd_printk_skb: 23 callbacks suppressed
	[  +6.171712] systemd-fstab-generator[4371]: Ignoring "noauto" for root device
	[  +0.069680] systemd-fstab-generator[4382]: Ignoring "noauto" for root device
	[  +0.067642] systemd-fstab-generator[4399]: Ignoring "noauto" for root device
	[  +0.066954] systemd-fstab-generator[4410]: Ignoring "noauto" for root device
	[  +0.066386] systemd-fstab-generator[4437]: Ignoring "noauto" for root device
	[Sep10 20:55] kauditd_printk_skb: 34 callbacks suppressed
	[ +25.621810] systemd-fstab-generator[6173]: Ignoring "noauto" for root device
	[  +0.131742] systemd-fstab-generator[6206]: Ignoring "noauto" for root device
	[  +0.084098] systemd-fstab-generator[6217]: Ignoring "noauto" for root device
	[  +0.081129] systemd-fstab-generator[6230]: Ignoring "noauto" for root device
	[ +11.393358] systemd-fstab-generator[6784]: Ignoring "noauto" for root device
	[  +0.060925] systemd-fstab-generator[6795]: Ignoring "noauto" for root device
	[  +0.065232] systemd-fstab-generator[6806]: Ignoring "noauto" for root device
	[  +0.066029] systemd-fstab-generator[6817]: Ignoring "noauto" for root device
	[  +0.083601] systemd-fstab-generator[6889]: Ignoring "noauto" for root device
	[  +1.086053] systemd-fstab-generator[7139]: Ignoring "noauto" for root device
	[  +4.649461] kauditd_printk_skb: 29 callbacks suppressed
	[Sep10 20:56] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.022981] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[ +28.942606] kauditd_printk_skb: 6 callbacks suppressed
	
	* 
	* ==> etcd [c3c4cff188ec] <==
	* {"level":"info","ts":"2023-09-10T20:55:04.707359Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-10T20:55:04.707416Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2023-09-10T20:55:04.707449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2023-09-10T20:55:04.707482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-09-10T20:55:04.707511Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2023-09-10T20:55:04.707545Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-09-10T20:55:04.710397Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-765000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-10T20:55:04.710599Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-10T20:55:04.710924Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-10T20:55:04.712949Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-09-10T20:55:04.712967Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-10T20:55:04.713779Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-10T20:55:04.713806Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-10T20:55:31.504762Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-09-10T20:55:31.504786Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-765000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2023-09-10T20:55:31.504832Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-10T20:55:31.504883Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-10T20:55:31.505841Z","caller":"v3rpc/watch.go:473","msg":"failed to send watch response to gRPC stream","error":"rpc error: code = Unavailable desc = transport is closing"}
	WARNING: 2023/09/10 20:55:31 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2023-09-10T20:55:31.516643Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-10T20:55:31.516665Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2023-09-10T20:55:31.516687Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2023-09-10T20:55:31.517914Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-10T20:55:31.517948Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-10T20:55:31.517953Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-765000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	* 
	* ==> etcd [dcf3749e90a0] <==
	* {"level":"info","ts":"2023-09-10T20:55:45.73584Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-10T20:55:45.735859Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-10T20:55:45.735972Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2023-09-10T20:55:45.736012Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2023-09-10T20:55:45.736374Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-10T20:55:45.736423Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-10T20:55:45.736437Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-10T20:55:45.736451Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-10T20:55:45.736486Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-10T20:55:45.736493Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-10T20:55:45.736719Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-10T20:55:47.022605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2023-09-10T20:55:47.022747Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-09-10T20:55:47.022812Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-09-10T20:55:47.022853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2023-09-10T20:55:47.022869Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-09-10T20:55:47.022894Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2023-09-10T20:55:47.022912Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-09-10T20:55:47.027544Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-10T20:55:47.027964Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-10T20:55:47.029921Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-10T20:55:47.030032Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-09-10T20:55:47.027547Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-765000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-10T20:55:47.030609Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-10T20:55:47.030646Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  20:57:01 up 2 min,  0 users,  load average: 0.49, 0.30, 0.12
	Linux functional-765000 5.10.57 #1 SMP PREEMPT Thu Sep 7 12:06:54 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [42f68a94180c] <==
	* I0910 20:55:47.690645       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0910 20:55:47.690658       1 aggregator.go:166] initial CRD sync complete...
	I0910 20:55:47.690661       1 autoregister_controller.go:141] Starting autoregister controller
	I0910 20:55:47.690665       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0910 20:55:47.690669       1 cache.go:39] Caches are synced for autoregister controller
	I0910 20:55:47.693913       1 shared_informer.go:318] Caches are synced for configmaps
	I0910 20:55:47.693927       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0910 20:55:47.693929       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0910 20:55:47.693970       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0910 20:55:47.694070       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0910 20:55:47.728237       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0910 20:55:48.591055       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0910 20:55:48.698561       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I0910 20:55:48.699019       1 controller.go:624] quota admission added evaluator for: endpoints
	I0910 20:55:49.234911       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0910 20:55:49.240400       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0910 20:55:49.257616       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0910 20:55:49.267790       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0910 20:55:49.271048       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0910 20:55:59.752407       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0910 20:56:07.296931       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.102.188.84"}
	I0910 20:56:12.898540       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0910 20:56:12.953355       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.104.80.29"}
	I0910 20:56:23.196782       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.98.120.11"}
	I0910 20:56:33.631359       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.104.176.146"}
	
	* 
	* ==> kube-controller-manager [8031748e1286] <==
	* I0910 20:55:18.158686       1 shared_informer.go:318] Caches are synced for PV protection
	I0910 20:55:18.165989       1 shared_informer.go:318] Caches are synced for TTL
	I0910 20:55:18.168133       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0910 20:55:18.168152       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0910 20:55:18.169247       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0910 20:55:18.169275       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0910 20:55:18.180275       1 shared_informer.go:318] Caches are synced for HPA
	I0910 20:55:18.180310       1 shared_informer.go:318] Caches are synced for PVC protection
	I0910 20:55:18.180282       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0910 20:55:18.181510       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0910 20:55:18.224712       1 shared_informer.go:318] Caches are synced for expand
	I0910 20:55:18.225792       1 shared_informer.go:318] Caches are synced for endpoint
	I0910 20:55:18.226890       1 shared_informer.go:318] Caches are synced for disruption
	I0910 20:55:18.230096       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0910 20:55:18.231358       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0910 20:55:18.231377       1 shared_informer.go:318] Caches are synced for GC
	I0910 20:55:18.238771       1 shared_informer.go:318] Caches are synced for persistent volume
	I0910 20:55:18.238775       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0910 20:55:18.239930       1 shared_informer.go:318] Caches are synced for crt configmap
	I0910 20:55:18.258317       1 shared_informer.go:318] Caches are synced for attach detach
	I0910 20:55:18.365437       1 shared_informer.go:318] Caches are synced for resource quota
	I0910 20:55:18.432161       1 shared_informer.go:318] Caches are synced for resource quota
	I0910 20:55:18.752091       1 shared_informer.go:318] Caches are synced for garbage collector
	I0910 20:55:18.779557       1 shared_informer.go:318] Caches are synced for garbage collector
	I0910 20:55:18.779571       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-controller-manager [81f74733b317] <==
	* I0910 20:56:12.909289       1 event.go:307] "Event occurred" object="default/hello-node-759d89bdcc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-759d89bdcc-msrgv"
	I0910 20:56:12.912406       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="12.841959ms"
	I0910 20:56:12.914227       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="1.798038ms"
	I0910 20:56:12.914324       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="38.254µs"
	I0910 20:56:12.914380       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="10.501µs"
	I0910 20:56:12.920955       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="30.212µs"
	I0910 20:56:18.559874       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="385.583µs"
	I0910 20:56:19.569011       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="22.211µs"
	I0910 20:56:20.579455       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="21.836µs"
	I0910 20:56:30.615339       1 event.go:307] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'k8s.io/minikube-hostpath' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0910 20:56:30.615592       1 event.go:307] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'k8s.io/minikube-hostpath' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0910 20:56:33.591482       1 event.go:307] "Event occurred" object="default/hello-node-connect" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-7799dfb7c6 to 1"
	I0910 20:56:33.595618       1 event.go:307] "Event occurred" object="default/hello-node-connect-7799dfb7c6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-7799dfb7c6-msvtv"
	I0910 20:56:33.598797       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="7.425154ms"
	I0910 20:56:33.602025       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="3.202446ms"
	I0910 20:56:33.608760       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="6.71087ms"
	I0910 20:56:33.608812       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="35.79µs"
	I0910 20:56:33.608836       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="12µs"
	I0910 20:56:34.702730       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="28.123µs"
	I0910 20:56:34.710619       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="17.124µs"
	I0910 20:56:35.735600       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="23.415µs"
	I0910 20:56:46.309378       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="26.374µs"
	I0910 20:56:49.311768       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="24.5µs"
	I0910 20:56:49.819910       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="23.291µs"
	I0910 20:57:00.881478       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="38.167µs"
	
	* 
	* ==> kube-proxy [75af5a79dda0] <==
	* I0910 20:55:48.812896       1 server_others.go:69] "Using iptables proxy"
	I0910 20:55:48.817335       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0910 20:55:48.850948       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0910 20:55:48.850959       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0910 20:55:48.852752       1 server_others.go:152] "Using iptables Proxier"
	I0910 20:55:48.852768       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0910 20:55:48.852822       1 server.go:846] "Version info" version="v1.28.1"
	I0910 20:55:48.852826       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 20:55:48.853734       1 config.go:188] "Starting service config controller"
	I0910 20:55:48.853738       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0910 20:55:48.853746       1 config.go:97] "Starting endpoint slice config controller"
	I0910 20:55:48.853747       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0910 20:55:48.857414       1 config.go:315] "Starting node config controller"
	I0910 20:55:48.857420       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0910 20:55:48.953962       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0910 20:55:48.953962       1 shared_informer.go:318] Caches are synced for service config
	I0910 20:55:48.957577       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [cf61f5a6a2a0] <==
	* I0910 20:55:02.901250       1 server_others.go:69] "Using iptables proxy"
	E0910 20:55:02.902310       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-765000": dial tcp 192.168.105.4:8441: connect: connection refused
	I0910 20:55:05.338998       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0910 20:55:05.364384       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0910 20:55:05.364400       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0910 20:55:05.366803       1 server_others.go:152] "Using iptables Proxier"
	I0910 20:55:05.366879       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0910 20:55:05.368667       1 server.go:846] "Version info" version="v1.28.1"
	I0910 20:55:05.368815       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 20:55:05.370504       1 config.go:188] "Starting service config controller"
	I0910 20:55:05.370550       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0910 20:55:05.370578       1 config.go:97] "Starting endpoint slice config controller"
	I0910 20:55:05.370603       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0910 20:55:05.370825       1 config.go:315] "Starting node config controller"
	I0910 20:55:05.370850       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0910 20:55:05.471141       1 shared_informer.go:318] Caches are synced for node config
	I0910 20:55:05.471149       1 shared_informer.go:318] Caches are synced for service config
	I0910 20:55:05.471157       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [23e68828c81f] <==
	* I0910 20:55:45.651407       1 serving.go:348] Generated self-signed cert in-memory
	I0910 20:55:47.661703       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0910 20:55:47.661715       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 20:55:47.663168       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0910 20:55:47.663178       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0910 20:55:47.663223       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0910 20:55:47.663248       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0910 20:55:47.663273       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0910 20:55:47.663275       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0910 20:55:47.663625       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0910 20:55:47.664226       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0910 20:55:47.764418       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0910 20:55:47.764454       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0910 20:55:47.764461       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [896c07b9cbfd] <==
	* I0910 20:55:03.517244       1 serving.go:348] Generated self-signed cert in-memory
	W0910 20:55:05.325067       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0910 20:55:05.325173       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0910 20:55:05.325197       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0910 20:55:05.325218       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0910 20:55:05.335536       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0910 20:55:05.335565       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0910 20:55:05.336536       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0910 20:55:05.336612       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0910 20:55:05.336621       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0910 20:55:05.336628       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0910 20:55:05.436749       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0910 20:55:31.540924       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0910 20:55:31.540945       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0910 20:55:31.540982       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0910 20:55:31.541113       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Sun 2023-09-10 20:54:12 UTC, ends at Sun 2023-09-10 20:57:01 UTC. --
	Sep 10 20:56:44 functional-765000 kubelet[7145]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 10 20:56:44 functional-765000 kubelet[7145]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 10 20:56:44 functional-765000 kubelet[7145]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 10 20:56:44 functional-765000 kubelet[7145]: I0910 20:56:44.387539    7145 scope.go:117] "RemoveContainer" containerID="ac2c7a43196df76b8806304b596c5d528bc214ad3e821248eecc786df7339286"
	Sep 10 20:56:46 functional-765000 kubelet[7145]: I0910 20:56:46.302632    7145 scope.go:117] "RemoveContainer" containerID="8a96dfd9f19f605375fd59ed469de8eff405b96153941829997ae71900953bb8"
	Sep 10 20:56:46 functional-765000 kubelet[7145]: E0910 20:56:46.302760    7145 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-759d89bdcc-msrgv_default(04719a35-de65-4ee6-9fa2-d410766df0a2)\"" pod="default/hello-node-759d89bdcc-msrgv" podUID="04719a35-de65-4ee6-9fa2-d410766df0a2"
	Sep 10 20:56:46 functional-765000 kubelet[7145]: I0910 20:56:46.309353    7145 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=2.595518013 podCreationTimestamp="2023-09-10 20:56:43 +0000 UTC" firstStartedPulling="2023-09-10 20:56:44.343770238 +0000 UTC m=+60.113453149" lastFinishedPulling="2023-09-10 20:56:45.057581166 +0000 UTC m=+60.827264078" observedRunningTime="2023-09-10 20:56:45.793945312 +0000 UTC m=+61.563628224" watchObservedRunningTime="2023-09-10 20:56:46.309328942 +0000 UTC m=+62.079011854"
	Sep 10 20:56:49 functional-765000 kubelet[7145]: I0910 20:56:49.301454    7145 scope.go:117] "RemoveContainer" containerID="b43e9e6632457bc4f49e8069786ebcb0690a9cbfbb0183dde396bccbf6552650"
	Sep 10 20:56:49 functional-765000 kubelet[7145]: I0910 20:56:49.814772    7145 scope.go:117] "RemoveContainer" containerID="b43e9e6632457bc4f49e8069786ebcb0690a9cbfbb0183dde396bccbf6552650"
	Sep 10 20:56:49 functional-765000 kubelet[7145]: I0910 20:56:49.814897    7145 scope.go:117] "RemoveContainer" containerID="35f965bbc3eac4288bb0174e8c8de9199a0dec7cfdc3fb24cc885a381b26710f"
	Sep 10 20:56:49 functional-765000 kubelet[7145]: E0910 20:56:49.814989    7145 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-msvtv_default(b00805a2-299c-484e-b7c6-ee53ad911ca3)\"" pod="default/hello-node-connect-7799dfb7c6-msvtv" podUID="b00805a2-299c-484e-b7c6-ee53ad911ca3"
	Sep 10 20:56:52 functional-765000 kubelet[7145]: I0910 20:56:52.527033    7145 topology_manager.go:215] "Topology Admit Handler" podUID="6e3332bb-c39a-44c6-a9d8-5093d86f2520" podNamespace="default" podName="busybox-mount"
	Sep 10 20:56:52 functional-765000 kubelet[7145]: I0910 20:56:52.643609    7145 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/6e3332bb-c39a-44c6-a9d8-5093d86f2520-test-volume\") pod \"busybox-mount\" (UID: \"6e3332bb-c39a-44c6-a9d8-5093d86f2520\") " pod="default/busybox-mount"
	Sep 10 20:56:52 functional-765000 kubelet[7145]: I0910 20:56:52.643645    7145 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz7m2\" (UniqueName: \"kubernetes.io/projected/6e3332bb-c39a-44c6-a9d8-5093d86f2520-kube-api-access-fz7m2\") pod \"busybox-mount\" (UID: \"6e3332bb-c39a-44c6-a9d8-5093d86f2520\") " pod="default/busybox-mount"
	Sep 10 20:56:55 functional-765000 kubelet[7145]: I0910 20:56:55.962967    7145 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/6e3332bb-c39a-44c6-a9d8-5093d86f2520-test-volume\") pod \"6e3332bb-c39a-44c6-a9d8-5093d86f2520\" (UID: \"6e3332bb-c39a-44c6-a9d8-5093d86f2520\") "
	Sep 10 20:56:55 functional-765000 kubelet[7145]: I0910 20:56:55.962990    7145 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz7m2\" (UniqueName: \"kubernetes.io/projected/6e3332bb-c39a-44c6-a9d8-5093d86f2520-kube-api-access-fz7m2\") pod \"6e3332bb-c39a-44c6-a9d8-5093d86f2520\" (UID: \"6e3332bb-c39a-44c6-a9d8-5093d86f2520\") "
	Sep 10 20:56:55 functional-765000 kubelet[7145]: I0910 20:56:55.962995    7145 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e3332bb-c39a-44c6-a9d8-5093d86f2520-test-volume" (OuterVolumeSpecName: "test-volume") pod "6e3332bb-c39a-44c6-a9d8-5093d86f2520" (UID: "6e3332bb-c39a-44c6-a9d8-5093d86f2520"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 10 20:56:55 functional-765000 kubelet[7145]: I0910 20:56:55.964106    7145 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e3332bb-c39a-44c6-a9d8-5093d86f2520-kube-api-access-fz7m2" (OuterVolumeSpecName: "kube-api-access-fz7m2") pod "6e3332bb-c39a-44c6-a9d8-5093d86f2520" (UID: "6e3332bb-c39a-44c6-a9d8-5093d86f2520"). InnerVolumeSpecName "kube-api-access-fz7m2". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 10 20:56:56 functional-765000 kubelet[7145]: I0910 20:56:56.063282    7145 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-fz7m2\" (UniqueName: \"kubernetes.io/projected/6e3332bb-c39a-44c6-a9d8-5093d86f2520-kube-api-access-fz7m2\") on node \"functional-765000\" DevicePath \"\""
	Sep 10 20:56:56 functional-765000 kubelet[7145]: I0910 20:56:56.063298    7145 reconciler_common.go:300] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/6e3332bb-c39a-44c6-a9d8-5093d86f2520-test-volume\") on node \"functional-765000\" DevicePath \"\""
	Sep 10 20:56:56 functional-765000 kubelet[7145]: I0910 20:56:56.854537    7145 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77913ddb191d2f9dc5accfb385192d87a4fc79221b9bc9390323b0941a9b6ee7"
	Sep 10 20:57:00 functional-765000 kubelet[7145]: I0910 20:57:00.301485    7145 scope.go:117] "RemoveContainer" containerID="8a96dfd9f19f605375fd59ed469de8eff405b96153941829997ae71900953bb8"
	Sep 10 20:57:00 functional-765000 kubelet[7145]: I0910 20:57:00.875376    7145 scope.go:117] "RemoveContainer" containerID="8a96dfd9f19f605375fd59ed469de8eff405b96153941829997ae71900953bb8"
	Sep 10 20:57:00 functional-765000 kubelet[7145]: I0910 20:57:00.875539    7145 scope.go:117] "RemoveContainer" containerID="ca01269826e416a538a46844a458464ad73531598ec30d11e99bfacbc2b6720b"
	Sep 10 20:57:00 functional-765000 kubelet[7145]: E0910 20:57:00.875621    7145 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 40s restarting failed container=echoserver-arm pod=hello-node-759d89bdcc-msrgv_default(04719a35-de65-4ee6-9fa2-d410766df0a2)\"" pod="default/hello-node-759d89bdcc-msrgv" podUID="04719a35-de65-4ee6-9fa2-d410766df0a2"
	
	* 
	* ==> storage-provisioner [2ca38be959a5] <==
	* I0910 20:55:48.941948       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0910 20:55:48.945753       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0910 20:55:48.945772       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0910 20:56:06.342989       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0910 20:56:06.343135       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-765000_35d4dc93-90f1-475f-9110-79ed849e984c!
	I0910 20:56:06.343471       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"12d7104a-f264-4dcc-94d2-b31bcdf33138", APIVersion:"v1", ResourceVersion:"571", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-765000_35d4dc93-90f1-475f-9110-79ed849e984c became leader
	I0910 20:56:06.443979       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-765000_35d4dc93-90f1-475f-9110-79ed849e984c!
	I0910 20:56:30.616186       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0910 20:56:30.616630       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"5a446e30-7543-48b2-b39f-ba4b77e75e1b", APIVersion:"v1", ResourceVersion:"675", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0910 20:56:30.616306       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    23a091ee-b693-47c5-8806-e827a28d38b5 349 0 2023-09-10 20:54:43 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2023-09-10 20:54:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-5a446e30-7543-48b2-b39f-ba4b77e75e1b &PersistentVolumeClaim{ObjectMeta:{myclaim  default  5a446e30-7543-48b2-b39f-ba4b77e75e1b 675 0 2023-09-10 20:56:30 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2023-09-10 20:56:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2023-09-10 20:56:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0910 20:56:30.617345       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-5a446e30-7543-48b2-b39f-ba4b77e75e1b" provisioned
	I0910 20:56:30.617370       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0910 20:56:30.617390       1 volume_store.go:212] Trying to save persistentvolume "pvc-5a446e30-7543-48b2-b39f-ba4b77e75e1b"
	I0910 20:56:30.624377       1 volume_store.go:219] persistentvolume "pvc-5a446e30-7543-48b2-b39f-ba4b77e75e1b" saved
	I0910 20:56:30.624629       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"5a446e30-7543-48b2-b39f-ba4b77e75e1b", APIVersion:"v1", ResourceVersion:"675", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-5a446e30-7543-48b2-b39f-ba4b77e75e1b
	
	* 
	* ==> storage-provisioner [f974a239353b] <==
	* I0910 20:55:03.462403       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0910 20:55:05.339287       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0910 20:55:05.339576       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0910 20:55:22.737166       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0910 20:55:22.737191       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"12d7104a-f264-4dcc-94d2-b31bcdf33138", APIVersion:"v1", ResourceVersion:"472", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-765000_538ace5e-e4c3-4515-8588-ad588171ba57 became leader
	I0910 20:55:22.737321       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-765000_538ace5e-e4c3-4515-8588-ad588171ba57!
	I0910 20:55:22.837363       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-765000_538ace5e-e4c3-4515-8588-ad588171ba57!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-765000 -n functional-765000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-765000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-765000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-765000 describe pod busybox-mount:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-765000/192.168.105.4
	Start Time:       Sun, 10 Sep 2023 13:56:52 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://fd179d10411129915c4bea8e5bf2fdde7d25ea382e8e8a4e171c64f5b8ad2178
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 10 Sep 2023 13:56:54 -0700
	      Finished:     Sun, 10 Sep 2023 13:56:54 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fz7m2 (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-fz7m2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  9s    default-scheduler  Successfully assigned default/busybox-mount to functional-765000
	  Normal  Pulling    9s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     7s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.123s (1.123s including waiting)
	  Normal  Created    7s    kubelet            Created container mount-munger
	  Normal  Started    7s    kubelet            Started container mount-munger

-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (28.26s)
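The post-mortem above flags busybox-mount as a "non-running pod" even though it completed successfully, because the field selector `status.phase!=Running` matches any pod not currently in the Running phase, including phase `Succeeded`. A minimal sketch of that selector's semantics over pod objects (hypothetical helper; the data shape mirrors `kubectl get po -o json`):

```python
def non_running_pods(pods):
    """Names of pods whose status.phase is anything but Running.

    Mirrors kubectl's --field-selector=status.phase!=Running: a pod that
    ran to completion (phase Succeeded, like busybox-mount) still matches.
    """
    return [p["metadata"]["name"] for p in pods
            if p.get("status", {}).get("phase") != "Running"]
```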

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.17s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-765000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-765000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 80. stderr: I0910 13:56:22.828798    2727 out.go:296] Setting OutFile to fd 1 ...
I0910 13:56:22.828903    2727 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0910 13:56:22.828905    2727 out.go:309] Setting ErrFile to fd 2...
I0910 13:56:22.828907    2727 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0910 13:56:22.829016    2727 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
I0910 13:56:22.829237    2727 mustload.go:65] Loading cluster: functional-765000
I0910 13:56:22.829433    2727 config.go:182] Loaded profile config "functional-765000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0910 13:56:22.834065    2727 out.go:177] 
W0910 13:56:22.838164    2727 out.go:239] X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/functional-765000/monitor: connect: connection refused
X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/functional-765000/monitor: connect: connection refused
W0910 13:56:22.838171    2727 out.go:239] * 
* 
W0910 13:56:22.839472    2727 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                           │
│    * If the above advice does not help, please let us know:                                                               │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
│                                                                                                                           │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
│    * Please also attach the following file to the GitHub issue:                                                           │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_tunnel_7075cb44437691034d825beac909ba5df9688569_0.log    │
│                                                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                           │
│    * If the above advice does not help, please let us know:                                                               │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
│                                                                                                                           │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
│    * Please also attach the following file to the GitHub issue:                                                           │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_tunnel_7075cb44437691034d825beac909ba5df9688569_0.log    │
│                                                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0910 13:56:22.842127    2727 out.go:177] 

stdout: 

functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-765000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2728: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-765000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-765000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-765000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-765000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-765000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.17s)
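The GUEST_STATUS failure above comes from minikube dialing the VM's QMP monitor unix socket (`.minikube/machines/functional-765000/monitor`) and getting connection refused, which means nothing is listening there anymore, i.e. the QEMU process is gone. A minimal sketch of such a unix-socket liveness probe (hypothetical helper, not minikube's actual code):

```python
import socket

def machine_socket_alive(path):
    """Return True if something is accepting connections on the unix socket at path."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(1.0)
    try:
        s.connect(path)
        return True
    except OSError:
        # Covers ConnectionRefusedError (stale socket file, no listener)
        # and FileNotFoundError (socket file never created / already removed).
        return False
    finally:
        s.close()
```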

TestImageBuild/serial/BuildWithBuildArg (1.06s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-050000
image_test.go:105: failed to pass build-args with args: "out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-050000" : 
-- stdout --
	Sending build context to Docker daemon  2.048kB
	Step 1/5 : FROM gcr.io/google-containers/alpine-with-bash:1.0
	 ---> 822c13824dc2
	Step 2/5 : ARG ENV_A
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 23fd9c2627e8
	Removing intermediate container 23fd9c2627e8
	 ---> ad03047f2bbe
	Step 3/5 : ARG ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in cbf987fc9344
	Removing intermediate container cbf987fc9344
	 ---> 8b4321597cb2
	Step 4/5 : RUN echo "test-build-arg" $ENV_A $ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in cb0f195862a1
	exec /bin/sh: exec format error
	

-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            Install the buildx component to build images with BuildKit:
	            https://docs.docker.com/go/buildx/
	
	The command '/bin/sh -c echo "test-build-arg" $ENV_A $ENV_B' returned a non-zero code: 1

** /stderr **
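The `exec format error` at RUN time happens because the `alpine-with-bash:1.0` base image only provides linux/amd64 layers (see the platform warnings above), so `/bin/sh` inside it is an x86-64 ELF that the arm64 guest kernel cannot execute without binfmt/qemu user emulation. A minimal sketch of detecting that kind of architecture mismatch by reading the binary's ELF header (constants from the ELF specification; hypothetical helper):

```python
import struct

# e_machine values from the ELF specification.
EM_X86_64 = 62
EM_AARCH64 = 183

def elf_machine(path):
    """Return the e_machine value of an ELF binary, or None if not an ELF file."""
    with open(path, "rb") as f:
        header = f.read(20)
    if len(header) < 20 or header[:4] != b"\x7fELF":
        return None
    # e_machine is a 16-bit field at byte offset 18; little-endian assumed.
    return struct.unpack_from("<H", header, 18)[0]
```

Comparing the result against the host's architecture would predict the failure before the kernel reports `exec format error`.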
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-050000 -n image-050000
helpers_test.go:244: <<< TestImageBuild/serial/BuildWithBuildArg FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestImageBuild/serial/BuildWithBuildArg]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p image-050000 logs -n 25
helpers_test.go:252: TestImageBuild/serial/BuildWithBuildArg logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                   Args                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| start          | -p functional-765000                     | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT |                     |
	|                | --dry-run --memory                       |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                  |                   |         |         |                     |                     |
	|                | --driver=qemu2                           |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                       | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT | 10 Sep 23 13:57 PDT |
	|                | -p functional-765000                     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                   |         |         |                     |                     |
	| ssh            | functional-765000 ssh findmnt            | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT | 10 Sep 23 13:57 PDT |
	|                | -T /mount1                               |                   |         |         |                     |                     |
	| ssh            | functional-765000 ssh findmnt            | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT | 10 Sep 23 13:57 PDT |
	|                | -T /mount2                               |                   |         |         |                     |                     |
	| ssh            | functional-765000 ssh findmnt            | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT |                     |
	|                | -T /mount3                               |                   |         |         |                     |                     |
	| ssh            | functional-765000 ssh findmnt            | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT | 10 Sep 23 13:57 PDT |
	|                | -T /mount1                               |                   |         |         |                     |                     |
	| ssh            | functional-765000 ssh findmnt            | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT | 10 Sep 23 13:57 PDT |
	|                | -T /mount2                               |                   |         |         |                     |                     |
	| ssh            | functional-765000 ssh findmnt            | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT |                     |
	|                | -T /mount3                               |                   |         |         |                     |                     |
	| ssh            | functional-765000 ssh findmnt            | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT | 10 Sep 23 13:57 PDT |
	|                | -T /mount1                               |                   |         |         |                     |                     |
	| ssh            | functional-765000 ssh findmnt            | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT | 10 Sep 23 13:57 PDT |
	|                | -T /mount2                               |                   |         |         |                     |                     |
	| ssh            | functional-765000 ssh findmnt            | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT |                     |
	|                | -T /mount3                               |                   |         |         |                     |                     |
	| update-context | functional-765000                        | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT | 10 Sep 23 13:57 PDT |
	|                | update-context                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                   |         |         |                     |                     |
	| update-context | functional-765000                        | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT | 10 Sep 23 13:57 PDT |
	|                | update-context                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                   |         |         |                     |                     |
	| update-context | functional-765000                        | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT |                     |
	|                | update-context                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                   |         |         |                     |                     |
	| image          | functional-765000                        | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT | 10 Sep 23 13:57 PDT |
	|                | image ls --format yaml                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| image          | functional-765000                        | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT |                     |
	|                | image ls --format short                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| ssh            | functional-765000 ssh pgrep              | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT |                     |
	|                | buildkitd                                |                   |         |         |                     |                     |
	| image          | functional-765000                        | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT | 10 Sep 23 13:57 PDT |
	|                | image ls --format json                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| image          | functional-765000 image build -t         | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT | 10 Sep 23 13:57 PDT |
	|                | localhost/my-image:functional-765000     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr         |                   |         |         |                     |                     |
	| image          | functional-765000                        | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT | 10 Sep 23 13:57 PDT |
	|                | image ls --format table                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| image          | functional-765000 image ls               | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT | 10 Sep 23 13:57 PDT |
	| delete         | -p functional-765000                     | functional-765000 | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT | 10 Sep 23 13:57 PDT |
	| start          | -p image-050000 --driver=qemu2           | image-050000      | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT | 10 Sep 23 13:57 PDT |
	|                |                                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-050000      | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT | 10 Sep 23 13:57 PDT |
	|                | ./testdata/image-build/test-normal       |                   |         |         |                     |                     |
	|                | -p image-050000                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-050000      | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT | 10 Sep 23 13:57 PDT |
	|                | --build-opt=build-arg=ENV_A=test_env_str |                   |         |         |                     |                     |
	|                | --build-opt=no-cache                     |                   |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p       |                   |         |         |                     |                     |
	|                | image-050000                             |                   |         |         |                     |                     |
	|----------------|------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/10 13:57:13
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 13:57:13.489168    2969 out.go:296] Setting OutFile to fd 1 ...
	I0910 13:57:13.489298    2969 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 13:57:13.489299    2969 out.go:309] Setting ErrFile to fd 2...
	I0910 13:57:13.489301    2969 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 13:57:13.489428    2969 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 13:57:13.490453    2969 out.go:303] Setting JSON to false
	I0910 13:57:13.505944    2969 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1608,"bootTime":1694377825,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0910 13:57:13.506009    2969 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0910 13:57:13.509381    2969 out.go:177] * [image-050000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0910 13:57:13.517376    2969 out.go:177]   - MINIKUBE_LOCATION=17207
	I0910 13:57:13.517402    2969 notify.go:220] Checking for updates...
	I0910 13:57:13.525405    2969 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 13:57:13.528449    2969 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 13:57:13.531408    2969 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 13:57:13.535378    2969 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	I0910 13:57:13.538459    2969 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 13:57:13.541506    2969 driver.go:373] Setting default libvirt URI to qemu:///system
	I0910 13:57:13.545410    2969 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 13:57:13.552398    2969 start.go:298] selected driver: qemu2
	I0910 13:57:13.552402    2969 start.go:902] validating driver "qemu2" against <nil>
	I0910 13:57:13.552410    2969 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 13:57:13.552529    2969 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0910 13:57:13.555391    2969 out.go:177] * Automatically selected the socket_vmnet network
	I0910 13:57:13.560565    2969 start_flags.go:384] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0910 13:57:13.560658    2969 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0910 13:57:13.560677    2969 cni.go:84] Creating CNI manager for ""
	I0910 13:57:13.560684    2969 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 13:57:13.560688    2969 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 13:57:13.560695    2969 start_flags.go:321] config:
	{Name:image-050000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:image-050000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 13:57:13.564906    2969 iso.go:125] acquiring lock: {Name:mk7b64d48bce9081530bc49bdfed6972c82152dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 13:57:13.571359    2969 out.go:177] * Starting control plane node image-050000 in cluster image-050000
	I0910 13:57:13.575184    2969 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0910 13:57:13.575199    2969 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0910 13:57:13.575215    2969 cache.go:57] Caching tarball of preloaded images
	I0910 13:57:13.575275    2969 preload.go:174] Found /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 13:57:13.575279    2969 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0910 13:57:13.575459    2969 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/image-050000/config.json ...
	I0910 13:57:13.575473    2969 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/image-050000/config.json: {Name:mk43a063386166e4720926ac4ebc26fecff37589 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 13:57:13.575676    2969 start.go:365] acquiring machines lock for image-050000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 13:57:13.575706    2969 start.go:369] acquired machines lock for "image-050000" in 26.583µs
	I0910 13:57:13.575715    2969 start.go:93] Provisioning new machine with config: &{Name:image-050000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.1 ClusterName:image-050000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 13:57:13.575741    2969 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 13:57:13.583239    2969 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0910 13:57:13.604785    2969 start.go:159] libmachine.API.Create for "image-050000" (driver="qemu2")
	I0910 13:57:13.604807    2969 client.go:168] LocalClient.Create starting
	I0910 13:57:13.604869    2969 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 13:57:13.604897    2969 main.go:141] libmachine: Decoding PEM data...
	I0910 13:57:13.604908    2969 main.go:141] libmachine: Parsing certificate...
	I0910 13:57:13.604944    2969 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 13:57:13.604960    2969 main.go:141] libmachine: Decoding PEM data...
	I0910 13:57:13.604967    2969 main.go:141] libmachine: Parsing certificate...
	I0910 13:57:13.605262    2969 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 13:57:13.952194    2969 main.go:141] libmachine: Creating SSH key...
	I0910 13:57:14.072939    2969 main.go:141] libmachine: Creating Disk image...
	I0910 13:57:14.072943    2969 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 13:57:14.073092    2969 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/image-050000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/image-050000/disk.qcow2
	I0910 13:57:14.098484    2969 main.go:141] libmachine: STDOUT: 
	I0910 13:57:14.098503    2969 main.go:141] libmachine: STDERR: 
	I0910 13:57:14.098565    2969 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/image-050000/disk.qcow2 +20000M
	I0910 13:57:14.106081    2969 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 13:57:14.106091    2969 main.go:141] libmachine: STDERR: 
	I0910 13:57:14.106115    2969 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/image-050000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/image-050000/disk.qcow2
	I0910 13:57:14.106118    2969 main.go:141] libmachine: Starting QEMU VM...
	I0910 13:57:14.106159    2969 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/image-050000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/image-050000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/image-050000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:ea:4c:ec:aa:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/image-050000/disk.qcow2
	I0910 13:57:14.161313    2969 main.go:141] libmachine: STDOUT: 
	I0910 13:57:14.161328    2969 main.go:141] libmachine: STDERR: 
	I0910 13:57:14.161331    2969 main.go:141] libmachine: Attempt 0
	I0910 13:57:14.161338    2969 main.go:141] libmachine: Searching for 9a:ea:4c:ec:aa:c9 in /var/db/dhcpd_leases ...
	I0910 13:57:14.161421    2969 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0910 13:57:14.161440    2969 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:a:8b:1d:9e:ed:cd ID:1,a:8b:1d:9e:ed:cd Lease:0x64ff7e74}
	I0910 13:57:14.161451    2969 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:8a:44:d5:f7:c5:50 ID:1,8a:44:d5:f7:c5:50 Lease:0x64fe2ce8}
	I0910 13:57:14.161456    2969 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:46:0:11:b3:e0:52 ID:1,46:0:11:b3:e0:52 Lease:0x64ff7e27}
	I0910 13:57:16.163639    2969 main.go:141] libmachine: Attempt 1
	I0910 13:57:16.163685    2969 main.go:141] libmachine: Searching for 9a:ea:4c:ec:aa:c9 in /var/db/dhcpd_leases ...
	I0910 13:57:16.163958    2969 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0910 13:57:16.164003    2969 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:a:8b:1d:9e:ed:cd ID:1,a:8b:1d:9e:ed:cd Lease:0x64ff7e74}
	I0910 13:57:16.164030    2969 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:8a:44:d5:f7:c5:50 ID:1,8a:44:d5:f7:c5:50 Lease:0x64fe2ce8}
	I0910 13:57:16.164086    2969 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:46:0:11:b3:e0:52 ID:1,46:0:11:b3:e0:52 Lease:0x64ff7e27}
	I0910 13:57:18.166210    2969 main.go:141] libmachine: Attempt 2
	I0910 13:57:18.166225    2969 main.go:141] libmachine: Searching for 9a:ea:4c:ec:aa:c9 in /var/db/dhcpd_leases ...
	I0910 13:57:18.166316    2969 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0910 13:57:18.166327    2969 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:a:8b:1d:9e:ed:cd ID:1,a:8b:1d:9e:ed:cd Lease:0x64ff7e74}
	I0910 13:57:18.166332    2969 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:8a:44:d5:f7:c5:50 ID:1,8a:44:d5:f7:c5:50 Lease:0x64fe2ce8}
	I0910 13:57:18.166336    2969 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:46:0:11:b3:e0:52 ID:1,46:0:11:b3:e0:52 Lease:0x64ff7e27}
	I0910 13:57:20.168426    2969 main.go:141] libmachine: Attempt 3
	I0910 13:57:20.168478    2969 main.go:141] libmachine: Searching for 9a:ea:4c:ec:aa:c9 in /var/db/dhcpd_leases ...
	I0910 13:57:20.168542    2969 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0910 13:57:20.168547    2969 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:a:8b:1d:9e:ed:cd ID:1,a:8b:1d:9e:ed:cd Lease:0x64ff7e74}
	I0910 13:57:20.168557    2969 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:8a:44:d5:f7:c5:50 ID:1,8a:44:d5:f7:c5:50 Lease:0x64fe2ce8}
	I0910 13:57:20.168562    2969 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:46:0:11:b3:e0:52 ID:1,46:0:11:b3:e0:52 Lease:0x64ff7e27}
	I0910 13:57:22.170593    2969 main.go:141] libmachine: Attempt 4
	I0910 13:57:22.170620    2969 main.go:141] libmachine: Searching for 9a:ea:4c:ec:aa:c9 in /var/db/dhcpd_leases ...
	I0910 13:57:22.170723    2969 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0910 13:57:22.170734    2969 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:a:8b:1d:9e:ed:cd ID:1,a:8b:1d:9e:ed:cd Lease:0x64ff7e74}
	I0910 13:57:22.170738    2969 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:8a:44:d5:f7:c5:50 ID:1,8a:44:d5:f7:c5:50 Lease:0x64fe2ce8}
	I0910 13:57:22.170742    2969 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:46:0:11:b3:e0:52 ID:1,46:0:11:b3:e0:52 Lease:0x64ff7e27}
	I0910 13:57:24.172807    2969 main.go:141] libmachine: Attempt 5
	I0910 13:57:24.172822    2969 main.go:141] libmachine: Searching for 9a:ea:4c:ec:aa:c9 in /var/db/dhcpd_leases ...
	I0910 13:57:24.172896    2969 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0910 13:57:24.172904    2969 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:a:8b:1d:9e:ed:cd ID:1,a:8b:1d:9e:ed:cd Lease:0x64ff7e74}
	I0910 13:57:24.172908    2969 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:8a:44:d5:f7:c5:50 ID:1,8a:44:d5:f7:c5:50 Lease:0x64fe2ce8}
	I0910 13:57:24.172913    2969 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:46:0:11:b3:e0:52 ID:1,46:0:11:b3:e0:52 Lease:0x64ff7e27}
	I0910 13:57:26.174976    2969 main.go:141] libmachine: Attempt 6
	I0910 13:57:26.175027    2969 main.go:141] libmachine: Searching for 9a:ea:4c:ec:aa:c9 in /var/db/dhcpd_leases ...
	I0910 13:57:26.175177    2969 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0910 13:57:26.175190    2969 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:9a:ea:4c:ec:aa:c9 ID:1,9a:ea:4c:ec:aa:c9 Lease:0x64ff7f35}
	I0910 13:57:26.175193    2969 main.go:141] libmachine: Found match: 9a:ea:4c:ec:aa:c9
	I0910 13:57:26.175208    2969 main.go:141] libmachine: IP: 192.168.105.5
	I0910 13:57:26.175213    2969 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
	I0910 13:57:28.186513    2969 machine.go:88] provisioning docker machine ...
	I0910 13:57:28.186534    2969 buildroot.go:166] provisioning hostname "image-050000"
	I0910 13:57:28.186608    2969 main.go:141] libmachine: Using SSH client type: native
	I0910 13:57:28.187045    2969 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1048763b0] 0x104878e10 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0910 13:57:28.187052    2969 main.go:141] libmachine: About to run SSH command:
	sudo hostname image-050000 && echo "image-050000" | sudo tee /etc/hostname
	I0910 13:57:28.276469    2969 main.go:141] libmachine: SSH cmd err, output: <nil>: image-050000
	
	I0910 13:57:28.276524    2969 main.go:141] libmachine: Using SSH client type: native
	I0910 13:57:28.276869    2969 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1048763b0] 0x104878e10 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0910 13:57:28.276878    2969 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\simage-050000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 image-050000/g' /etc/hosts;
				else 
					echo '127.0.1.1 image-050000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 13:57:28.359785    2969 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 13:57:28.359793    2969 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17207-1093/.minikube CaCertPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17207-1093/.minikube}
	I0910 13:57:28.359805    2969 buildroot.go:174] setting up certificates
	I0910 13:57:28.359809    2969 provision.go:83] configureAuth start
	I0910 13:57:28.359815    2969 provision.go:138] copyHostCerts
	I0910 13:57:28.359894    2969 exec_runner.go:144] found /Users/jenkins/minikube-integration/17207-1093/.minikube/cert.pem, removing ...
	I0910 13:57:28.359899    2969 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17207-1093/.minikube/cert.pem
	I0910 13:57:28.360036    2969 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17207-1093/.minikube/cert.pem (1123 bytes)
	I0910 13:57:28.360237    2969 exec_runner.go:144] found /Users/jenkins/minikube-integration/17207-1093/.minikube/key.pem, removing ...
	I0910 13:57:28.360239    2969 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17207-1093/.minikube/key.pem
	I0910 13:57:28.360297    2969 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17207-1093/.minikube/key.pem (1675 bytes)
	I0910 13:57:28.360407    2969 exec_runner.go:144] found /Users/jenkins/minikube-integration/17207-1093/.minikube/ca.pem, removing ...
	I0910 13:57:28.360409    2969 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17207-1093/.minikube/ca.pem
	I0910 13:57:28.360460    2969 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17207-1093/.minikube/ca.pem (1078 bytes)
	I0910 13:57:28.360549    2969 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca-key.pem org=jenkins.image-050000 san=[192.168.105.5 192.168.105.5 localhost 127.0.0.1 minikube image-050000]
	I0910 13:57:28.415689    2969 provision.go:172] copyRemoteCerts
	I0910 13:57:28.415726    2969 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 13:57:28.415730    2969 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/image-050000/id_rsa Username:docker}
	I0910 13:57:28.454029    2969 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0910 13:57:28.461483    2969 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0910 13:57:28.468705    2969 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0910 13:57:28.475442    2969 provision.go:86] duration metric: configureAuth took 115.626084ms
	I0910 13:57:28.475447    2969 buildroot.go:189] setting minikube options for container-runtime
	I0910 13:57:28.475554    2969 config.go:182] Loaded profile config "image-050000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 13:57:28.475593    2969 main.go:141] libmachine: Using SSH client type: native
	I0910 13:57:28.475807    2969 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1048763b0] 0x104878e10 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0910 13:57:28.475810    2969 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0910 13:57:28.546950    2969 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0910 13:57:28.546958    2969 buildroot.go:70] root file system type: tmpfs
	I0910 13:57:28.547018    2969 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0910 13:57:28.547064    2969 main.go:141] libmachine: Using SSH client type: native
	I0910 13:57:28.547304    2969 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1048763b0] 0x104878e10 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0910 13:57:28.547344    2969 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0910 13:57:28.623064    2969 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0910 13:57:28.623109    2969 main.go:141] libmachine: Using SSH client type: native
	I0910 13:57:28.623373    2969 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1048763b0] 0x104878e10 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0910 13:57:28.623381    2969 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0910 13:57:28.972212    2969 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0910 13:57:28.972221    2969 machine.go:91] provisioned docker machine in 785.7065ms
	I0910 13:57:28.972225    2969 client.go:171] LocalClient.Create took 15.367579292s
	I0910 13:57:28.972242    2969 start.go:167] duration metric: libmachine.API.Create for "image-050000" took 15.367623541s
	I0910 13:57:28.972246    2969 start.go:300] post-start starting for "image-050000" (driver="qemu2")
	I0910 13:57:28.972250    2969 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 13:57:28.972316    2969 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 13:57:28.972323    2969 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/image-050000/id_rsa Username:docker}
	I0910 13:57:29.010962    2969 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 13:57:29.012413    2969 info.go:137] Remote host: Buildroot 2021.02.12
	I0910 13:57:29.012420    2969 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17207-1093/.minikube/addons for local assets ...
	I0910 13:57:29.012488    2969 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17207-1093/.minikube/files for local assets ...
	I0910 13:57:29.012591    2969 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17207-1093/.minikube/files/etc/ssl/certs/22002.pem -> 22002.pem in /etc/ssl/certs
	I0910 13:57:29.012703    2969 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 13:57:29.015239    2969 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/files/etc/ssl/certs/22002.pem --> /etc/ssl/certs/22002.pem (1708 bytes)
	I0910 13:57:29.022314    2969 start.go:303] post-start completed in 50.064959ms
	I0910 13:57:29.022636    2969 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/image-050000/config.json ...
	I0910 13:57:29.022785    2969 start.go:128] duration metric: createHost completed in 15.447205416s
	I0910 13:57:29.022807    2969 main.go:141] libmachine: Using SSH client type: native
	I0910 13:57:29.023030    2969 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1048763b0] 0x104878e10 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0910 13:57:29.023038    2969 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0910 13:57:29.097355    2969 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694379449.520259544
	
	I0910 13:57:29.097360    2969 fix.go:206] guest clock: 1694379449.520259544
	I0910 13:57:29.097363    2969 fix.go:219] Guest: 2023-09-10 13:57:29.520259544 -0700 PDT Remote: 2023-09-10 13:57:29.022786 -0700 PDT m=+15.553875876 (delta=497.473544ms)
	I0910 13:57:29.097376    2969 fix.go:190] guest clock delta is within tolerance: 497.473544ms
	I0910 13:57:29.097378    2969 start.go:83] releasing machines lock for "image-050000", held for 15.521833083s
	I0910 13:57:29.097653    2969 ssh_runner.go:195] Run: cat /version.json
	I0910 13:57:29.097653    2969 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 13:57:29.097658    2969 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/image-050000/id_rsa Username:docker}
	I0910 13:57:29.097671    2969 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/image-050000/id_rsa Username:docker}
	I0910 13:57:29.180681    2969 ssh_runner.go:195] Run: systemctl --version
	I0910 13:57:29.182902    2969 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 13:57:29.184857    2969 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 13:57:29.184894    2969 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0910 13:57:29.190407    2969 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 13:57:29.190411    2969 start.go:466] detecting cgroup driver to use...
	I0910 13:57:29.190480    2969 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 13:57:29.196589    2969 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0910 13:57:29.199651    2969 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0910 13:57:29.202732    2969 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0910 13:57:29.202751    2969 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0910 13:57:29.206068    2969 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 13:57:29.209162    2969 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0910 13:57:29.211813    2969 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 13:57:29.214770    2969 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 13:57:29.218080    2969 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0910 13:57:29.221311    2969 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 13:57:29.223918    2969 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 13:57:29.226721    2969 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 13:57:29.290637    2969 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0910 13:57:29.299247    2969 start.go:466] detecting cgroup driver to use...
	I0910 13:57:29.299308    2969 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0910 13:57:29.306292    2969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 13:57:29.311215    2969 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 13:57:29.316945    2969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 13:57:29.321656    2969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 13:57:29.326364    2969 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0910 13:57:29.365774    2969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 13:57:29.371102    2969 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 13:57:29.376627    2969 ssh_runner.go:195] Run: which cri-dockerd
	I0910 13:57:29.377833    2969 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0910 13:57:29.380770    2969 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0910 13:57:29.385629    2969 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0910 13:57:29.442569    2969 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0910 13:57:29.514580    2969 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0910 13:57:29.514590    2969 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0910 13:57:29.521211    2969 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 13:57:29.584516    2969 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 13:57:30.750345    2969 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.165827708s)
	I0910 13:57:30.750393    2969 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0910 13:57:30.811250    2969 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0910 13:57:30.865674    2969 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0910 13:57:30.930214    2969 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 13:57:30.993599    2969 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0910 13:57:31.000791    2969 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 13:57:31.067371    2969 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0910 13:57:31.091036    2969 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0910 13:57:31.091107    2969 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0910 13:57:31.093442    2969 start.go:534] Will wait 60s for crictl version
	I0910 13:57:31.093481    2969 ssh_runner.go:195] Run: which crictl
	I0910 13:57:31.094911    2969 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0910 13:57:31.114539    2969 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1alpha2
	I0910 13:57:31.114606    2969 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 13:57:31.124222    2969 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 13:57:31.140468    2969 out.go:204] * Preparing Kubernetes v1.28.1 on Docker 24.0.6 ...
	I0910 13:57:31.140607    2969 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0910 13:57:31.142090    2969 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 13:57:31.146226    2969 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0910 13:57:31.146268    2969 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0910 13:57:31.151838    2969 docker.go:636] Got preloaded images: 
	I0910 13:57:31.151842    2969 docker.go:642] registry.k8s.io/kube-apiserver:v1.28.1 wasn't preloaded
	I0910 13:57:31.151882    2969 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0910 13:57:31.154881    2969 ssh_runner.go:195] Run: which lz4
	I0910 13:57:31.156264    2969 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0910 13:57:31.157516    2969 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 13:57:31.157527    2969 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356902558 bytes)
	I0910 13:57:32.471853    2969 docker.go:600] Took 1.315643 seconds to copy over tarball
	I0910 13:57:32.471907    2969 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 13:57:33.509009    2969 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.037099875s)
	I0910 13:57:33.509017    2969 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0910 13:57:33.524583    2969 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0910 13:57:33.527438    2969 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0910 13:57:33.532345    2969 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 13:57:33.602981    2969 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 13:57:35.080964    2969 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.477985584s)
	I0910 13:57:35.081051    2969 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0910 13:57:35.087156    2969 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0910 13:57:35.087164    2969 cache_images.go:84] Images are preloaded, skipping loading
	I0910 13:57:35.087218    2969 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0910 13:57:35.094629    2969 cni.go:84] Creating CNI manager for ""
	I0910 13:57:35.094634    2969 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 13:57:35.094643    2969 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0910 13:57:35.094651    2969 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.5 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:image-050000 NodeName:image-050000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0910 13:57:35.094723    2969 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "image-050000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
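The generated kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a quick sanity check that the stream is assembled as expected, the document kinds can be enumerated with stdlib-only string handling; this is an illustrative sketch, not minikube's own validation, and the inlined CONFIG is an abbreviated stand-in for the real file.

```python
# Sketch: enumerate the `kind:` of each document in a multi-document YAML
# stream like the kubeadm config above. CONFIG is an abbreviated stand-in
# for the real /var/tmp/minikube/kubeadm.yaml content.
CONFIG = """\
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
"""

def doc_kinds(text: str) -> list[str]:
    """Split on '---' separator lines and collect each document's kind."""
    kinds = []
    for doc in text.split("\n---\n"):
        for line in doc.splitlines():
            if line.startswith("kind:"):
                kinds.append(line.split(":", 1)[1].strip())
    return kinds

print(doc_kinds(CONFIG))
```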
	
	I0910 13:57:35.094756    2969 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=image-050000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:image-050000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0910 13:57:35.094811    2969 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0910 13:57:35.098380    2969 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 13:57:35.098401    2969 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 13:57:35.101651    2969 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0910 13:57:35.106870    2969 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0910 13:57:35.112148    2969 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0910 13:57:35.116746    2969 ssh_runner.go:195] Run: grep 192.168.105.5	control-plane.minikube.internal$ /etc/hosts
	I0910 13:57:35.118006    2969 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
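The /etc/hosts command above is an idempotent upsert: grep -v drops any existing line for the host, then the desired mapping is appended. A minimal sketch of the same filter-then-append logic (function name and sample addresses are illustrative, not from minikube):

```python
# Sketch of the idempotent hosts-file update above: remove any existing
# entry for the host (matched as a tab-separated suffix, mirroring the
# grep -v $'\t<host>$' pattern), then append the desired mapping.
def upsert_hosts_line(content: str, ip: str, host: str) -> str:
    kept = [l for l in content.splitlines() if not l.endswith("\t" + host)]
    kept.append(f"{ip}\t{host}")
    return "\n".join(kept) + "\n"

before = "127.0.0.1\tlocalhost\n10.0.0.9\tcontrol-plane.minikube.internal\n"
after = upsert_hosts_line(before, "192.168.105.5", "control-plane.minikube.internal")
print(after)
```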
	I0910 13:57:35.121932    2969 certs.go:56] Setting up /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/image-050000 for IP: 192.168.105.5
	I0910 13:57:35.121949    2969 certs.go:190] acquiring lock for shared ca certs: {Name:mk28134b321cd562735798fd2fcb10a58019fa5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 13:57:35.122075    2969 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17207-1093/.minikube/ca.key
	I0910 13:57:35.122112    2969 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17207-1093/.minikube/proxy-client-ca.key
	I0910 13:57:35.122135    2969 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/image-050000/client.key
	I0910 13:57:35.122141    2969 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/image-050000/client.crt with IP's: []
	I0910 13:57:35.161576    2969 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/image-050000/client.crt ...
	I0910 13:57:35.161579    2969 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/image-050000/client.crt: {Name:mk7aa3b891ce74bdcce6e310a9a3f26c989aa392 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 13:57:35.161808    2969 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/image-050000/client.key ...
	I0910 13:57:35.161810    2969 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/image-050000/client.key: {Name:mk6f870aa8658bbe4a55f67ee68aecbb20dafad6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 13:57:35.161926    2969 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/image-050000/apiserver.key.e69b33ca
	I0910 13:57:35.161934    2969 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/image-050000/apiserver.crt.e69b33ca with IP's: [192.168.105.5 10.96.0.1 127.0.0.1 10.0.0.1]
	I0910 13:57:35.337950    2969 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/image-050000/apiserver.crt.e69b33ca ...
	I0910 13:57:35.337956    2969 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/image-050000/apiserver.crt.e69b33ca: {Name:mk4c5981caee4ec610e6fc47d41ddeec788804ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 13:57:35.338193    2969 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/image-050000/apiserver.key.e69b33ca ...
	I0910 13:57:35.338195    2969 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/image-050000/apiserver.key.e69b33ca: {Name:mk35bb82937383fa1753981aa98698c63373e2c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 13:57:35.338309    2969 certs.go:337] copying /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/image-050000/apiserver.crt.e69b33ca -> /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/image-050000/apiserver.crt
	I0910 13:57:35.338568    2969 certs.go:341] copying /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/image-050000/apiserver.key.e69b33ca -> /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/image-050000/apiserver.key
	I0910 13:57:35.338699    2969 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/image-050000/proxy-client.key
	I0910 13:57:35.338705    2969 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/image-050000/proxy-client.crt with IP's: []
	I0910 13:57:35.424229    2969 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/image-050000/proxy-client.crt ...
	I0910 13:57:35.424231    2969 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/image-050000/proxy-client.crt: {Name:mka919d4e63275cdb56989cadb14f391fceb4e75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 13:57:35.424399    2969 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/image-050000/proxy-client.key ...
	I0910 13:57:35.424401    2969 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/image-050000/proxy-client.key: {Name:mkb51fc5006728bfa979f6292e1572a723f98008 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 13:57:35.424642    2969 certs.go:437] found cert: /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/2200.pem (1338 bytes)
	W0910 13:57:35.424667    2969 certs.go:433] ignoring /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/2200_empty.pem, impossibly tiny 0 bytes
	I0910 13:57:35.424672    2969 certs.go:437] found cert: /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca-key.pem (1675 bytes)
	I0910 13:57:35.424690    2969 certs.go:437] found cert: /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem (1078 bytes)
	I0910 13:57:35.424707    2969 certs.go:437] found cert: /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem (1123 bytes)
	I0910 13:57:35.424723    2969 certs.go:437] found cert: /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/key.pem (1675 bytes)
	I0910 13:57:35.424765    2969 certs.go:437] found cert: /Users/jenkins/minikube-integration/17207-1093/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17207-1093/.minikube/files/etc/ssl/certs/22002.pem (1708 bytes)
	I0910 13:57:35.425060    2969 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/image-050000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0910 13:57:35.433079    2969 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/image-050000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0910 13:57:35.439917    2969 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/image-050000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 13:57:35.446695    2969 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/image-050000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0910 13:57:35.453897    2969 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 13:57:35.460977    2969 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 13:57:35.467720    2969 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 13:57:35.474591    2969 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 13:57:35.481612    2969 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/2200.pem --> /usr/share/ca-certificates/2200.pem (1338 bytes)
	I0910 13:57:35.488790    2969 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/files/etc/ssl/certs/22002.pem --> /usr/share/ca-certificates/22002.pem (1708 bytes)
	I0910 13:57:35.495568    2969 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 13:57:35.502299    2969 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 13:57:35.507507    2969 ssh_runner.go:195] Run: openssl version
	I0910 13:57:35.509565    2969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2200.pem && ln -fs /usr/share/ca-certificates/2200.pem /etc/ssl/certs/2200.pem"
	I0910 13:57:35.512738    2969 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2200.pem
	I0910 13:57:35.514131    2969 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 10 20:54 /usr/share/ca-certificates/2200.pem
	I0910 13:57:35.514148    2969 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2200.pem
	I0910 13:57:35.516107    2969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2200.pem /etc/ssl/certs/51391683.0"
	I0910 13:57:35.519140    2969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22002.pem && ln -fs /usr/share/ca-certificates/22002.pem /etc/ssl/certs/22002.pem"
	I0910 13:57:35.522679    2969 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22002.pem
	I0910 13:57:35.524292    2969 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 10 20:54 /usr/share/ca-certificates/22002.pem
	I0910 13:57:35.524313    2969 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22002.pem
	I0910 13:57:35.526174    2969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22002.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 13:57:35.529641    2969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 13:57:35.532852    2969 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 13:57:35.534363    2969 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 10 20:53 /usr/share/ca-certificates/minikubeCA.pem
	I0910 13:57:35.534383    2969 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 13:57:35.536313    2969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 13:57:35.539271    2969 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0910 13:57:35.540690    2969 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0910 13:57:35.540721    2969 kubeadm.go:404] StartCluster: {Name:image-050000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:image-050000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 13:57:35.540792    2969 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0910 13:57:35.546087    2969 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 13:57:35.549457    2969 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 13:57:35.552248    2969 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 13:57:35.555004    2969 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 13:57:35.555015    2969 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0910 13:57:35.576622    2969 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0910 13:57:35.576644    2969 kubeadm.go:322] [preflight] Running pre-flight checks
	I0910 13:57:35.630441    2969 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 13:57:35.630489    2969 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 13:57:35.630545    2969 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0910 13:57:35.692069    2969 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 13:57:35.702045    2969 out.go:204]   - Generating certificates and keys ...
	I0910 13:57:35.702089    2969 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0910 13:57:35.702119    2969 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0910 13:57:35.726864    2969 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0910 13:57:35.891437    2969 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0910 13:57:35.932738    2969 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0910 13:57:36.069622    2969 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0910 13:57:36.206950    2969 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0910 13:57:36.207012    2969 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [image-050000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0910 13:57:36.474776    2969 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0910 13:57:36.474844    2969 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [image-050000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0910 13:57:36.568222    2969 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0910 13:57:36.638280    2969 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0910 13:57:36.662112    2969 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0910 13:57:36.662142    2969 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 13:57:36.844439    2969 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 13:57:36.913495    2969 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 13:57:37.067507    2969 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 13:57:37.137333    2969 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 13:57:37.137658    2969 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 13:57:37.138793    2969 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 13:57:37.147039    2969 out.go:204]   - Booting up control plane ...
	I0910 13:57:37.147095    2969 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 13:57:37.147140    2969 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 13:57:37.147168    2969 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 13:57:37.147218    2969 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 13:57:37.147267    2969 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 13:57:37.147286    2969 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0910 13:57:37.206081    2969 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0910 13:57:41.207442    2969 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.001509 seconds
	I0910 13:57:41.207499    2969 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0910 13:57:41.212864    2969 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0910 13:57:41.722569    2969 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0910 13:57:41.722670    2969 kubeadm.go:322] [mark-control-plane] Marking the node image-050000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0910 13:57:42.227518    2969 kubeadm.go:322] [bootstrap-token] Using token: ppytv1.qaxvnso54ae7bj2m
	I0910 13:57:42.233919    2969 out.go:204]   - Configuring RBAC rules ...
	I0910 13:57:42.233982    2969 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0910 13:57:42.234759    2969 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0910 13:57:42.238080    2969 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0910 13:57:42.239386    2969 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0910 13:57:42.240564    2969 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0910 13:57:42.242357    2969 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0910 13:57:42.246869    2969 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0910 13:57:42.397862    2969 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0910 13:57:42.636788    2969 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0910 13:57:42.637208    2969 kubeadm.go:322] 
	I0910 13:57:42.637238    2969 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0910 13:57:42.637239    2969 kubeadm.go:322] 
	I0910 13:57:42.637277    2969 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0910 13:57:42.637280    2969 kubeadm.go:322] 
	I0910 13:57:42.637290    2969 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0910 13:57:42.637362    2969 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0910 13:57:42.637405    2969 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0910 13:57:42.637407    2969 kubeadm.go:322] 
	I0910 13:57:42.637435    2969 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0910 13:57:42.637437    2969 kubeadm.go:322] 
	I0910 13:57:42.637467    2969 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0910 13:57:42.637469    2969 kubeadm.go:322] 
	I0910 13:57:42.637501    2969 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0910 13:57:42.637534    2969 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0910 13:57:42.637566    2969 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0910 13:57:42.637567    2969 kubeadm.go:322] 
	I0910 13:57:42.637615    2969 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0910 13:57:42.637653    2969 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0910 13:57:42.637654    2969 kubeadm.go:322] 
	I0910 13:57:42.637690    2969 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ppytv1.qaxvnso54ae7bj2m \
	I0910 13:57:42.637736    2969 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:10bd6c29805637182224d42f91d4bace622161cd91f1a9b0f464f3aed87a5ead \
	I0910 13:57:42.637746    2969 kubeadm.go:322] 	--control-plane 
	I0910 13:57:42.637748    2969 kubeadm.go:322] 
	I0910 13:57:42.637794    2969 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0910 13:57:42.637796    2969 kubeadm.go:322] 
	I0910 13:57:42.637832    2969 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ppytv1.qaxvnso54ae7bj2m \
	I0910 13:57:42.637910    2969 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:10bd6c29805637182224d42f91d4bace622161cd91f1a9b0f464f3aed87a5ead 
	I0910 13:57:42.637964    2969 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
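The --discovery-token-ca-cert-hash printed in the join commands above is "sha256:" followed by the hex SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A minimal stdlib sketch of that format, with placeholder bytes standing in for the real DER blob (extracting the SPKI itself would need openssl or a crypto library):

```python
import hashlib

# Sketch: format a kubeadm discovery-token-ca-cert-hash value.
# der_spki is a placeholder for the CA cert's DER-encoded SubjectPublicKeyInfo.
def ca_cert_hash(der_spki: bytes) -> str:
    return "sha256:" + hashlib.sha256(der_spki).hexdigest()

print(ca_cert_hash(b"example-der-bytes"))
```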
	I0910 13:57:42.637968    2969 cni.go:84] Creating CNI manager for ""
	I0910 13:57:42.637974    2969 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 13:57:42.644872    2969 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0910 13:57:42.648946    2969 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0910 13:57:42.652438    2969 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0910 13:57:42.657308    2969 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 13:57:42.657351    2969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:57:42.657376    2969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d731e1cec1979d094cdaebcdf1ed599ff8209767 minikube.k8s.io/name=image-050000 minikube.k8s.io/updated_at=2023_09_10T13_57_42_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:57:42.660569    2969 ops.go:34] apiserver oom_adj: -16
	I0910 13:57:42.721637    2969 kubeadm.go:1081] duration metric: took 64.31675ms to wait for elevateKubeSystemPrivileges.
	I0910 13:57:42.721644    2969 kubeadm.go:406] StartCluster complete in 7.181005417s
	I0910 13:57:42.721653    2969 settings.go:142] acquiring lock: {Name:mk5069f344fe5f68592bc6867db9aede10bc3fe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 13:57:42.721731    2969 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 13:57:42.722074    2969 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/kubeconfig: {Name:mk7c70008fc2d1b0ba569659f9157708891e79a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 13:57:42.722263    2969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0910 13:57:42.722300    2969 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0910 13:57:42.722342    2969 addons.go:69] Setting storage-provisioner=true in profile "image-050000"
	I0910 13:57:42.722348    2969 addons.go:231] Setting addon storage-provisioner=true in "image-050000"
	I0910 13:57:42.722370    2969 host.go:66] Checking if "image-050000" exists ...
	I0910 13:57:42.722375    2969 config.go:182] Loaded profile config "image-050000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 13:57:42.722425    2969 addons.go:69] Setting default-storageclass=true in profile "image-050000"
	I0910 13:57:42.722433    2969 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "image-050000"
	W0910 13:57:42.722640    2969 host.go:54] host status for "image-050000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/image-050000/monitor: connect: connection refused
	W0910 13:57:42.722646    2969 addons.go:277] "image-050000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	I0910 13:57:42.729341    2969 addons.go:231] Setting addon default-storageclass=true in "image-050000"
	I0910 13:57:42.729357    2969 host.go:66] Checking if "image-050000" exists ...
	I0910 13:57:42.730010    2969 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 13:57:42.730014    2969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 13:57:42.730020    2969 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/image-050000/id_rsa Username:docker}
	I0910 13:57:42.732385    2969 kapi.go:248] "coredns" deployment in "kube-system" namespace and "image-050000" context rescaled to 1 replicas
	I0910 13:57:42.732398    2969 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 13:57:42.735139    2969 out.go:177] * Verifying Kubernetes components...
	I0910 13:57:42.740092    2969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 13:57:42.765353    2969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0910 13:57:42.765718    2969 api_server.go:52] waiting for apiserver process to appear ...
	I0910 13:57:42.765744    2969 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 13:57:42.775322    2969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 13:57:43.138335    2969 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0910 13:57:43.138360    2969 api_server.go:72] duration metric: took 405.95775ms to wait for apiserver process to appear ...
	I0910 13:57:43.138363    2969 api_server.go:88] waiting for apiserver healthz status ...
	I0910 13:57:43.138371    2969 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0910 13:57:43.141391    2969 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
	I0910 13:57:43.144244    2969 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0910 13:57:43.148427    2969 addons.go:502] enable addons completed in 426.144625ms: enabled=[storage-provisioner default-storageclass]
	I0910 13:57:43.149064    2969 api_server.go:141] control plane version: v1.28.1
	I0910 13:57:43.149068    2969 api_server.go:131] duration metric: took 10.703ms to wait for apiserver health ...
	I0910 13:57:43.149074    2969 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 13:57:43.151960    2969 system_pods.go:59] 4 kube-system pods found
	I0910 13:57:43.151964    2969 system_pods.go:61] "etcd-image-050000" [63554ef1-83ff-4366-8d1c-d7f02796e98f] Pending
	I0910 13:57:43.151966    2969 system_pods.go:61] "kube-apiserver-image-050000" [7c468fc9-fdd5-4572-826e-7fbdc87a2c7a] Pending
	I0910 13:57:43.151968    2969 system_pods.go:61] "kube-controller-manager-image-050000" [9c14e305-11f3-4546-afae-d04e698c2a97] Pending
	I0910 13:57:43.151970    2969 system_pods.go:61] "kube-scheduler-image-050000" [cc80574d-c910-4332-9136-de0cb3705891] Pending
	I0910 13:57:43.151972    2969 system_pods.go:74] duration metric: took 2.896292ms to wait for pod list to return data ...
	I0910 13:57:43.151974    2969 kubeadm.go:581] duration metric: took 419.573208ms to wait for : map[apiserver:true system_pods:true] ...
	I0910 13:57:43.151979    2969 node_conditions.go:102] verifying NodePressure condition ...
	I0910 13:57:43.153377    2969 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0910 13:57:43.153396    2969 node_conditions.go:123] node cpu capacity is 2
	I0910 13:57:43.153409    2969 node_conditions.go:105] duration metric: took 1.426125ms to run NodePressure ...
	I0910 13:57:43.153419    2969 start.go:228] waiting for startup goroutines ...
	I0910 13:57:43.153426    2969 start.go:233] waiting for cluster config update ...
	I0910 13:57:43.153435    2969 start.go:242] writing updated cluster config ...
	I0910 13:57:43.154011    2969 ssh_runner.go:195] Run: rm -f paused
	I0910 13:57:43.181895    2969 start.go:600] kubectl: 1.27.2, cluster: 1.28.1 (minor skew: 1)
	I0910 13:57:43.186378    2969 out.go:177] * Done! kubectl is now configured to use "image-050000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Sun 2023-09-10 20:57:25 UTC, ends at Sun 2023-09-10 20:57:45 UTC. --
	Sep 10 20:57:38 image-050000 cri-dockerd[1006]: time="2023-09-10T20:57:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5f8e82145d52526d647a041b9af3a1804c5b5496d3b495c808699d2f724a14f4/resolv.conf as [nameserver 192.168.105.1]"
	Sep 10 20:57:38 image-050000 cri-dockerd[1006]: time="2023-09-10T20:57:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/91b51609c908fd2a2433fb4f79df50694ebc2946c49539f5fd438fd246fc94f5/resolv.conf as [nameserver 192.168.105.1]"
	Sep 10 20:57:38 image-050000 dockerd[1114]: time="2023-09-10T20:57:38.688540590Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 20:57:38 image-050000 dockerd[1114]: time="2023-09-10T20:57:38.688582048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 20:57:38 image-050000 dockerd[1114]: time="2023-09-10T20:57:38.688595590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 20:57:38 image-050000 dockerd[1114]: time="2023-09-10T20:57:38.688606548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 20:57:38 image-050000 dockerd[1114]: time="2023-09-10T20:57:38.693154840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 20:57:38 image-050000 dockerd[1114]: time="2023-09-10T20:57:38.693204465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 20:57:38 image-050000 dockerd[1114]: time="2023-09-10T20:57:38.693342381Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 20:57:38 image-050000 dockerd[1114]: time="2023-09-10T20:57:38.693353340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 20:57:38 image-050000 dockerd[1114]: time="2023-09-10T20:57:38.731299340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 20:57:38 image-050000 dockerd[1114]: time="2023-09-10T20:57:38.731330923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 20:57:38 image-050000 dockerd[1114]: time="2023-09-10T20:57:38.731459548Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 20:57:38 image-050000 dockerd[1114]: time="2023-09-10T20:57:38.731469298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 20:57:44 image-050000 dockerd[1108]: time="2023-09-10T20:57:44.698064717Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Sep 10 20:57:44 image-050000 dockerd[1108]: time="2023-09-10T20:57:44.823459884Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Sep 10 20:57:44 image-050000 dockerd[1108]: time="2023-09-10T20:57:44.838980551Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Sep 10 20:57:44 image-050000 dockerd[1114]: time="2023-09-10T20:57:44.885527468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 20:57:44 image-050000 dockerd[1114]: time="2023-09-10T20:57:44.885562468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 20:57:44 image-050000 dockerd[1114]: time="2023-09-10T20:57:44.885569009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 20:57:44 image-050000 dockerd[1114]: time="2023-09-10T20:57:44.885573384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 20:57:45 image-050000 dockerd[1108]: time="2023-09-10T20:57:45.021709734Z" level=info msg="ignoring event" container=cb0f195862a13bedaee68b3a5e09a31ee6234e2fa01cfa1ba663922bf63105df module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 20:57:45 image-050000 dockerd[1114]: time="2023-09-10T20:57:45.021828617Z" level=info msg="shim disconnected" id=cb0f195862a13bedaee68b3a5e09a31ee6234e2fa01cfa1ba663922bf63105df namespace=moby
	Sep 10 20:57:45 image-050000 dockerd[1114]: time="2023-09-10T20:57:45.021857178Z" level=warning msg="cleaning up after shim disconnected" id=cb0f195862a13bedaee68b3a5e09a31ee6234e2fa01cfa1ba663922bf63105df namespace=moby
	Sep 10 20:57:45 image-050000 dockerd[1114]: time="2023-09-10T20:57:45.021861578Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	b6ec37d6c188f       b4a5a57e99492       7 seconds ago       Running             kube-scheduler            0                   91b51609c908f
	9057378f7308a       b29fb62480892       7 seconds ago       Running             kube-apiserver            0                   f6d1b0c5495c0
	ef7cdbf6bc555       8b6e1980b7584       7 seconds ago       Running             kube-controller-manager   0                   5f8e82145d525
	b6a6beacabc08       9cdd6470f48c8       7 seconds ago       Running             etcd                      0                   3fd5e6e805f16
	
	* 
	* ==> describe nodes <==
	* Name:               image-050000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=image-050000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d731e1cec1979d094cdaebcdf1ed599ff8209767
	                    minikube.k8s.io/name=image-050000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_10T13_57_42_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 10 Sep 2023 20:57:40 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  image-050000
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 10 Sep 2023 20:57:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 10 Sep 2023 20:57:42 +0000   Sun, 10 Sep 2023 20:57:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 10 Sep 2023 20:57:42 +0000   Sun, 10 Sep 2023 20:57:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 10 Sep 2023 20:57:42 +0000   Sun, 10 Sep 2023 20:57:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 10 Sep 2023 20:57:42 +0000   Sun, 10 Sep 2023 20:57:39 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
	Addresses:
	  InternalIP:  192.168.105.5
	  Hostname:    image-050000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 fd5c1c5f15b8471597c447e41673024e
	  System UUID:                fd5c1c5f15b8471597c447e41673024e
	  Boot ID:                    eda15ee8-c95c-4d50-b624-3d267c06673f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-image-050000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3s
	  kube-system                 kube-apiserver-image-050000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-controller-manager-image-050000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-image-050000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (2%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From     Message
	  ----    ------                   ----             ----     -------
	  Normal  Starting                 8s               kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  8s               kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7s (x8 over 8s)  kubelet  Node image-050000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s (x8 over 8s)  kubelet  Node image-050000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s (x7 over 8s)  kubelet  Node image-050000 status is now: NodeHasSufficientPID
	  Normal  Starting                 3s               kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  3s               kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3s               kubelet  Node image-050000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s               kubelet  Node image-050000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s               kubelet  Node image-050000 status is now: NodeHasSufficientPID
	
	* 
	* ==> dmesg <==
	* [Sep10 20:57] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.665989] EINJ: EINJ table not found.
	[  +0.508613] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043509] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000860] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.133099] systemd-fstab-generator[483]: Ignoring "noauto" for root device
	[  +0.069056] systemd-fstab-generator[494]: Ignoring "noauto" for root device
	[  +0.454954] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.152510] systemd-fstab-generator[710]: Ignoring "noauto" for root device
	[  +0.068865] systemd-fstab-generator[721]: Ignoring "noauto" for root device
	[  +0.067637] systemd-fstab-generator[764]: Ignoring "noauto" for root device
	[  +1.230796] systemd-fstab-generator[923]: Ignoring "noauto" for root device
	[  +0.053066] systemd-fstab-generator[934]: Ignoring "noauto" for root device
	[  +0.066401] systemd-fstab-generator[945]: Ignoring "noauto" for root device
	[  +0.064595] systemd-fstab-generator[956]: Ignoring "noauto" for root device
	[  +0.070237] systemd-fstab-generator[999]: Ignoring "noauto" for root device
	[  +2.536571] systemd-fstab-generator[1101]: Ignoring "noauto" for root device
	[  +1.460521] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.138170] systemd-fstab-generator[1434]: Ignoring "noauto" for root device
	[  +5.106515] systemd-fstab-generator[2342]: Ignoring "noauto" for root device
	[  +2.154809] kauditd_printk_skb: 41 callbacks suppressed
	
	* 
	* ==> etcd [b6a6beacabc0] <==
	* {"level":"info","ts":"2023-09-10T20:57:38.700515Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-10T20:57:38.700542Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-10T20:57:38.700546Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-10T20:57:38.700758Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-09-10T20:57:38.700768Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-09-10T20:57:38.701041Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 switched to configuration voters=(6403572207504089856)"}
	{"level":"info","ts":"2023-09-10T20:57:38.701088Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","added-peer-id":"58de0efec1d86300","added-peer-peer-urls":["https://192.168.105.5:2380"]}
	{"level":"info","ts":"2023-09-10T20:57:39.389271Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 is starting a new election at term 1"}
	{"level":"info","ts":"2023-09-10T20:57:39.389344Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-09-10T20:57:39.389377Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgPreVoteResp from 58de0efec1d86300 at term 1"}
	{"level":"info","ts":"2023-09-10T20:57:39.389398Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became candidate at term 2"}
	{"level":"info","ts":"2023-09-10T20:57:39.389429Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgVoteResp from 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-09-10T20:57:39.389449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became leader at term 2"}
	{"level":"info","ts":"2023-09-10T20:57:39.38947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 58de0efec1d86300 elected leader 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-09-10T20:57:39.390283Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"58de0efec1d86300","local-member-attributes":"{Name:image-050000 ClientURLs:[https://192.168.105.5:2379]}","request-path":"/0/members/58de0efec1d86300/attributes","cluster-id":"cd5c0afff2184bea","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-10T20:57:39.390405Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-10T20:57:39.390891Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-10T20:57:39.390983Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.5:2379"}
	{"level":"info","ts":"2023-09-10T20:57:39.391058Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-10T20:57:39.391267Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-10T20:57:39.391339Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-10T20:57:39.391361Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-10T20:57:39.393001Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-10T20:57:39.395415Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-10T20:57:39.398439Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  20:57:45 up 0 min,  0 users,  load average: 0.51, 0.11, 0.04
	Linux image-050000 5.10.57 #1 SMP PREEMPT Thu Sep 7 12:06:54 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [9057378f7308] <==
	* I0910 20:57:40.068409       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0910 20:57:40.068856       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0910 20:57:40.068873       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0910 20:57:40.069119       1 shared_informer.go:318] Caches are synced for configmaps
	I0910 20:57:40.069468       1 controller.go:624] quota admission added evaluator for: namespaces
	I0910 20:57:40.087245       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0910 20:57:40.087366       1 aggregator.go:166] initial CRD sync complete...
	I0910 20:57:40.087390       1 autoregister_controller.go:141] Starting autoregister controller
	I0910 20:57:40.087408       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0910 20:57:40.087422       1 cache.go:39] Caches are synced for autoregister controller
	I0910 20:57:40.088383       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0910 20:57:40.261386       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0910 20:57:40.972345       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0910 20:57:40.973874       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0910 20:57:40.973880       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0910 20:57:41.119635       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0910 20:57:41.129821       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0910 20:57:41.173879       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0910 20:57:41.183431       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.105.5]
	I0910 20:57:41.183844       1 controller.go:624] quota admission added evaluator for: endpoints
	I0910 20:57:41.185147       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0910 20:57:42.010831       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0910 20:57:42.816783       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0910 20:57:42.820588       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0910 20:57:42.824287       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	
	* 
	* ==> kube-controller-manager [ef7cdbf6bc55] <==
	* I0910 20:57:43.011035       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0910 20:57:43.011039       1 shared_informer.go:311] Waiting for caches to sync for taint
	E0910 20:57:43.059465       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0910 20:57:43.059478       1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0910 20:57:43.211262       1 controllermanager.go:642] "Started controller" controller="clusterrole-aggregation-controller"
	I0910 20:57:43.211316       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0910 20:57:43.211322       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0910 20:57:43.361444       1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
	I0910 20:57:43.361522       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0910 20:57:43.361530       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0910 20:57:43.510558       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0910 20:57:43.510594       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0910 20:57:43.510599       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0910 20:57:43.660357       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I0910 20:57:43.660393       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0910 20:57:43.660400       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0910 20:57:43.811306       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0910 20:57:43.811347       1 job_controller.go:226] "Starting job controller"
	I0910 20:57:43.811354       1 shared_informer.go:311] Waiting for caches to sync for job
	I0910 20:57:43.960079       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0910 20:57:43.960138       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0910 20:57:43.960146       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0910 20:57:44.110496       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0910 20:57:44.110544       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0910 20:57:44.110552       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	
	* 
	* ==> kube-scheduler [b6ec37d6c188] <==
	* W0910 20:57:40.017124       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0910 20:57:40.017141       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0910 20:57:40.017182       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0910 20:57:40.017200       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0910 20:57:40.017252       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0910 20:57:40.017261       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0910 20:57:40.017309       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0910 20:57:40.017322       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0910 20:57:40.017376       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0910 20:57:40.017382       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0910 20:57:40.017426       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0910 20:57:40.017434       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0910 20:57:40.909139       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0910 20:57:40.909158       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0910 20:57:41.008959       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0910 20:57:41.009004       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0910 20:57:41.021162       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0910 20:57:41.021177       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0910 20:57:41.021248       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0910 20:57:41.021262       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0910 20:57:41.063549       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0910 20:57:41.063626       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0910 20:57:41.072508       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0910 20:57:41.072582       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0910 20:57:41.612930       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Sun 2023-09-10 20:57:25 UTC, ends at Sun 2023-09-10 20:57:45 UTC. --
	Sep 10 20:57:42 image-050000 kubelet[2348]: I0910 20:57:42.973795    2348 kubelet_node_status.go:73] "Successfully registered node" node="image-050000"
	Sep 10 20:57:42 image-050000 kubelet[2348]: I0910 20:57:42.975506    2348 topology_manager.go:215] "Topology Admit Handler" podUID="2c302b6e5fe9a0750b0b7c522c744c83" podNamespace="kube-system" podName="etcd-image-050000"
	Sep 10 20:57:42 image-050000 kubelet[2348]: I0910 20:57:42.975562    2348 topology_manager.go:215] "Topology Admit Handler" podUID="9bd8e0ec4821b161f9bcba46df17184f" podNamespace="kube-system" podName="kube-apiserver-image-050000"
	Sep 10 20:57:42 image-050000 kubelet[2348]: I0910 20:57:42.975596    2348 topology_manager.go:215] "Topology Admit Handler" podUID="4335e2d9154ad1f8799189ea9174a049" podNamespace="kube-system" podName="kube-controller-manager-image-050000"
	Sep 10 20:57:42 image-050000 kubelet[2348]: I0910 20:57:42.975613    2348 topology_manager.go:215] "Topology Admit Handler" podUID="a71e55c787951e079541aa4da3b03c56" podNamespace="kube-system" podName="kube-scheduler-image-050000"
	Sep 10 20:57:42 image-050000 kubelet[2348]: E0910 20:57:42.980196    2348 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-image-050000\" already exists" pod="kube-system/kube-scheduler-image-050000"
	Sep 10 20:57:42 image-050000 kubelet[2348]: E0910 20:57:42.981225    2348 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-image-050000\" already exists" pod="kube-system/kube-controller-manager-image-050000"
	Sep 10 20:57:43 image-050000 kubelet[2348]: I0910 20:57:43.167852    2348 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/2c302b6e5fe9a0750b0b7c522c744c83-etcd-data\") pod \"etcd-image-050000\" (UID: \"2c302b6e5fe9a0750b0b7c522c744c83\") " pod="kube-system/etcd-image-050000"
	Sep 10 20:57:43 image-050000 kubelet[2348]: I0910 20:57:43.167878    2348 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9bd8e0ec4821b161f9bcba46df17184f-k8s-certs\") pod \"kube-apiserver-image-050000\" (UID: \"9bd8e0ec4821b161f9bcba46df17184f\") " pod="kube-system/kube-apiserver-image-050000"
	Sep 10 20:57:43 image-050000 kubelet[2348]: I0910 20:57:43.167890    2348 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9bd8e0ec4821b161f9bcba46df17184f-usr-share-ca-certificates\") pod \"kube-apiserver-image-050000\" (UID: \"9bd8e0ec4821b161f9bcba46df17184f\") " pod="kube-system/kube-apiserver-image-050000"
	Sep 10 20:57:43 image-050000 kubelet[2348]: I0910 20:57:43.167949    2348 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4335e2d9154ad1f8799189ea9174a049-k8s-certs\") pod \"kube-controller-manager-image-050000\" (UID: \"4335e2d9154ad1f8799189ea9174a049\") " pod="kube-system/kube-controller-manager-image-050000"
	Sep 10 20:57:43 image-050000 kubelet[2348]: I0910 20:57:43.167960    2348 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4335e2d9154ad1f8799189ea9174a049-usr-share-ca-certificates\") pod \"kube-controller-manager-image-050000\" (UID: \"4335e2d9154ad1f8799189ea9174a049\") " pod="kube-system/kube-controller-manager-image-050000"
	Sep 10 20:57:43 image-050000 kubelet[2348]: I0910 20:57:43.167969    2348 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a71e55c787951e079541aa4da3b03c56-kubeconfig\") pod \"kube-scheduler-image-050000\" (UID: \"a71e55c787951e079541aa4da3b03c56\") " pod="kube-system/kube-scheduler-image-050000"
	Sep 10 20:57:43 image-050000 kubelet[2348]: I0910 20:57:43.168008    2348 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/2c302b6e5fe9a0750b0b7c522c744c83-etcd-certs\") pod \"etcd-image-050000\" (UID: \"2c302b6e5fe9a0750b0b7c522c744c83\") " pod="kube-system/etcd-image-050000"
	Sep 10 20:57:43 image-050000 kubelet[2348]: I0910 20:57:43.168020    2348 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9bd8e0ec4821b161f9bcba46df17184f-ca-certs\") pod \"kube-apiserver-image-050000\" (UID: \"9bd8e0ec4821b161f9bcba46df17184f\") " pod="kube-system/kube-apiserver-image-050000"
	Sep 10 20:57:43 image-050000 kubelet[2348]: I0910 20:57:43.168030    2348 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4335e2d9154ad1f8799189ea9174a049-ca-certs\") pod \"kube-controller-manager-image-050000\" (UID: \"4335e2d9154ad1f8799189ea9174a049\") " pod="kube-system/kube-controller-manager-image-050000"
	Sep 10 20:57:43 image-050000 kubelet[2348]: I0910 20:57:43.168038    2348 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4335e2d9154ad1f8799189ea9174a049-flexvolume-dir\") pod \"kube-controller-manager-image-050000\" (UID: \"4335e2d9154ad1f8799189ea9174a049\") " pod="kube-system/kube-controller-manager-image-050000"
	Sep 10 20:57:43 image-050000 kubelet[2348]: I0910 20:57:43.168047    2348 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4335e2d9154ad1f8799189ea9174a049-kubeconfig\") pod \"kube-controller-manager-image-050000\" (UID: \"4335e2d9154ad1f8799189ea9174a049\") " pod="kube-system/kube-controller-manager-image-050000"
	Sep 10 20:57:43 image-050000 kubelet[2348]: I0910 20:57:43.851841    2348 apiserver.go:52] "Watching apiserver"
	Sep 10 20:57:43 image-050000 kubelet[2348]: I0910 20:57:43.867223    2348 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Sep 10 20:57:43 image-050000 kubelet[2348]: E0910 20:57:43.927593    2348 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-image-050000\" already exists" pod="kube-system/kube-apiserver-image-050000"
	Sep 10 20:57:43 image-050000 kubelet[2348]: I0910 20:57:43.935766    2348 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-image-050000" podStartSLOduration=2.93572505 podCreationTimestamp="2023-09-10 20:57:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-10 20:57:43.935604467 +0000 UTC m=+1.132312168" watchObservedRunningTime="2023-09-10 20:57:43.93572505 +0000 UTC m=+1.132432751"
	Sep 10 20:57:43 image-050000 kubelet[2348]: I0910 20:57:43.939262    2348 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-image-050000" podStartSLOduration=3.9392464670000003 podCreationTimestamp="2023-09-10 20:57:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-10 20:57:43.939170425 +0000 UTC m=+1.135878126" watchObservedRunningTime="2023-09-10 20:57:43.939246467 +0000 UTC m=+1.135954168"
	Sep 10 20:57:43 image-050000 kubelet[2348]: I0910 20:57:43.948620    2348 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-image-050000" podStartSLOduration=1.948145759 podCreationTimestamp="2023-09-10 20:57:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-10 20:57:43.942774509 +0000 UTC m=+1.139482209" watchObservedRunningTime="2023-09-10 20:57:43.948145759 +0000 UTC m=+1.144853418"
	Sep 10 20:57:43 image-050000 kubelet[2348]: I0910 20:57:43.948912    2348 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-image-050000" podStartSLOduration=1.948903217 podCreationTimestamp="2023-09-10 20:57:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-10 20:57:43.948017217 +0000 UTC m=+1.144724918" watchObservedRunningTime="2023-09-10 20:57:43.948903217 +0000 UTC m=+1.145610918"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p image-050000 -n image-050000
helpers_test.go:261: (dbg) Run:  kubectl --context image-050000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestImageBuild/serial/BuildWithBuildArg FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestImageBuild/serial/BuildWithBuildArg (1.06s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (57.77s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-065000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-065000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (16.874935916s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-065000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-065000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [239342be-bb47-4e0d-89b1-1089d16a02e0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [239342be-bb47-4e0d-89b1-1089d16a02e0] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.011372458s
addons_test.go:238: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-065000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-065000 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-065000 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.105.6
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.105.6: exit status 1 (15.039375667s)

-- stdout --
	;; connection timed out; no servers could be reached
	

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.105.6" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-065000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-065000 addons disable ingress-dns --alsologtostderr -v=1: (7.436845833s)
addons_test.go:287: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-065000 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-065000 addons disable ingress --alsologtostderr -v=1: (7.123759083s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ingress-addon-legacy-065000 -n ingress-addon-legacy-065000
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-065000 logs -n 25
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                   Args                   |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-765000 ssh findmnt            | functional-765000           | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT |                     |
	|                | -T /mount3                               |                             |         |         |                     |                     |
	| update-context | functional-765000                        | functional-765000           | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT | 10 Sep 23 13:57 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-765000                        | functional-765000           | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT | 10 Sep 23 13:57 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-765000                        | functional-765000           | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT |                     |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| image          | functional-765000                        | functional-765000           | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT | 10 Sep 23 13:57 PDT |
	|                | image ls --format yaml                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-765000                        | functional-765000           | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT |                     |
	|                | image ls --format short                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| ssh            | functional-765000 ssh pgrep              | functional-765000           | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT |                     |
	|                | buildkitd                                |                             |         |         |                     |                     |
	| image          | functional-765000                        | functional-765000           | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT | 10 Sep 23 13:57 PDT |
	|                | image ls --format json                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-765000 image build -t         | functional-765000           | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT | 10 Sep 23 13:57 PDT |
	|                | localhost/my-image:functional-765000     |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr         |                             |         |         |                     |                     |
	| image          | functional-765000                        | functional-765000           | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT | 10 Sep 23 13:57 PDT |
	|                | image ls --format table                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-765000 image ls               | functional-765000           | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT | 10 Sep 23 13:57 PDT |
	| delete         | -p functional-765000                     | functional-765000           | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT | 10 Sep 23 13:57 PDT |
	| start          | -p image-050000 --driver=qemu2           | image-050000                | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT | 10 Sep 23 13:57 PDT |
	|                |                                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-050000                | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT | 10 Sep 23 13:57 PDT |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | -p image-050000                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-050000                | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT | 10 Sep 23 13:57 PDT |
	|                | --build-opt=build-arg=ENV_A=test_env_str |                             |         |         |                     |                     |
	|                | --build-opt=no-cache                     |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p       |                             |         |         |                     |                     |
	|                | image-050000                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-050000                | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT | 10 Sep 23 13:57 PDT |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | --build-opt=no-cache -p                  |                             |         |         |                     |                     |
	|                | image-050000                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-050000                | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT | 10 Sep 23 13:57 PDT |
	|                | -f inner/Dockerfile                      |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-f            |                             |         |         |                     |                     |
	|                | -p image-050000                          |                             |         |         |                     |                     |
	| delete         | -p image-050000                          | image-050000                | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT | 10 Sep 23 13:57 PDT |
	| start          | -p ingress-addon-legacy-065000           | ingress-addon-legacy-065000 | jenkins | v1.31.2 | 10 Sep 23 13:57 PDT | 10 Sep 23 13:59 PDT |
	|                | --kubernetes-version=v1.18.20            |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	|                | --driver=qemu2                           |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-065000              | ingress-addon-legacy-065000 | jenkins | v1.31.2 | 10 Sep 23 13:59 PDT | 10 Sep 23 13:59 PDT |
	|                | addons enable ingress                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-065000              | ingress-addon-legacy-065000 | jenkins | v1.31.2 | 10 Sep 23 13:59 PDT | 10 Sep 23 13:59 PDT |
	|                | addons enable ingress-dns                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-065000              | ingress-addon-legacy-065000 | jenkins | v1.31.2 | 10 Sep 23 13:59 PDT | 10 Sep 23 13:59 PDT |
	|                | ssh curl -s http://127.0.0.1/            |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'             |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-065000 ip           | ingress-addon-legacy-065000 | jenkins | v1.31.2 | 10 Sep 23 13:59 PDT | 10 Sep 23 13:59 PDT |
	| addons         | ingress-addon-legacy-065000              | ingress-addon-legacy-065000 | jenkins | v1.31.2 | 10 Sep 23 13:59 PDT | 10 Sep 23 14:00 PDT |
	|                | addons disable ingress-dns               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-065000              | ingress-addon-legacy-065000 | jenkins | v1.31.2 | 10 Sep 23 14:00 PDT | 10 Sep 23 14:00 PDT |
	|                | addons disable ingress                   |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/10 13:57:45
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 13:57:45.746853    3008 out.go:296] Setting OutFile to fd 1 ...
	I0910 13:57:45.746966    3008 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 13:57:45.746969    3008 out.go:309] Setting ErrFile to fd 2...
	I0910 13:57:45.746971    3008 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 13:57:45.747080    3008 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 13:57:45.748137    3008 out.go:303] Setting JSON to false
	I0910 13:57:45.763256    3008 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1640,"bootTime":1694377825,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0910 13:57:45.763321    3008 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0910 13:57:45.766317    3008 out.go:177] * [ingress-addon-legacy-065000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0910 13:57:45.774201    3008 out.go:177]   - MINIKUBE_LOCATION=17207
	I0910 13:57:45.774237    3008 notify.go:220] Checking for updates...
	I0910 13:57:45.778254    3008 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 13:57:45.779261    3008 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 13:57:45.782234    3008 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 13:57:45.785275    3008 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	I0910 13:57:45.788273    3008 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 13:57:45.791401    3008 driver.go:373] Setting default libvirt URI to qemu:///system
	I0910 13:57:45.795261    3008 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 13:57:45.804205    3008 start.go:298] selected driver: qemu2
	I0910 13:57:45.804212    3008 start.go:902] validating driver "qemu2" against <nil>
	I0910 13:57:45.804217    3008 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 13:57:45.806086    3008 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0910 13:57:45.809223    3008 out.go:177] * Automatically selected the socket_vmnet network
	I0910 13:57:45.810533    3008 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 13:57:45.810570    3008 cni.go:84] Creating CNI manager for ""
	I0910 13:57:45.810578    3008 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0910 13:57:45.810582    3008 start_flags.go:321] config:
{Name:ingress-addon-legacy-065000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-065000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 13:57:45.814560    3008 iso.go:125] acquiring lock: {Name:mk7b64d48bce9081530bc49bdfed6972c82152dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 13:57:45.822281    3008 out.go:177] * Starting control plane node ingress-addon-legacy-065000 in cluster ingress-addon-legacy-065000
	I0910 13:57:45.826231    3008 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0910 13:57:45.879812    3008 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0910 13:57:45.879834    3008 cache.go:57] Caching tarball of preloaded images
	I0910 13:57:45.880002    3008 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0910 13:57:45.885236    3008 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0910 13:57:45.893220    3008 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0910 13:57:45.968346    3008 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0910 13:57:51.994947    3008 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0910 13:57:51.995102    3008 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0910 13:57:52.743340    3008 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0910 13:57:52.743558    3008 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/config.json ...
	I0910 13:57:52.743584    3008 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/config.json: {Name:mk864142a1a6596b8ff7e1952e4697216cd34e0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 13:57:52.743834    3008 start.go:365] acquiring machines lock for ingress-addon-legacy-065000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 13:57:52.743860    3008 start.go:369] acquired machines lock for "ingress-addon-legacy-065000" in 20.875µs
I0910 13:57:52.743870    3008 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-065000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-065000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 13:57:52.743903    3008 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 13:57:52.752926    3008 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0910 13:57:52.767526    3008 start.go:159] libmachine.API.Create for "ingress-addon-legacy-065000" (driver="qemu2")
	I0910 13:57:52.767547    3008 client.go:168] LocalClient.Create starting
	I0910 13:57:52.767624    3008 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 13:57:52.767648    3008 main.go:141] libmachine: Decoding PEM data...
	I0910 13:57:52.767661    3008 main.go:141] libmachine: Parsing certificate...
	I0910 13:57:52.767703    3008 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 13:57:52.767721    3008 main.go:141] libmachine: Decoding PEM data...
	I0910 13:57:52.767731    3008 main.go:141] libmachine: Parsing certificate...
	I0910 13:57:52.768045    3008 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 13:57:52.893906    3008 main.go:141] libmachine: Creating SSH key...
	I0910 13:57:53.064305    3008 main.go:141] libmachine: Creating Disk image...
	I0910 13:57:53.064311    3008 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 13:57:53.064485    3008 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/ingress-addon-legacy-065000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/ingress-addon-legacy-065000/disk.qcow2
	I0910 13:57:53.073250    3008 main.go:141] libmachine: STDOUT: 
	I0910 13:57:53.073263    3008 main.go:141] libmachine: STDERR: 
	I0910 13:57:53.073321    3008 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/ingress-addon-legacy-065000/disk.qcow2 +20000M
	I0910 13:57:53.080593    3008 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 13:57:53.080608    3008 main.go:141] libmachine: STDERR: 
	I0910 13:57:53.080624    3008 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/ingress-addon-legacy-065000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/ingress-addon-legacy-065000/disk.qcow2
	I0910 13:57:53.080630    3008 main.go:141] libmachine: Starting QEMU VM...
	I0910 13:57:53.080666    3008 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4096 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/ingress-addon-legacy-065000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/ingress-addon-legacy-065000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/ingress-addon-legacy-065000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:73:57:75:55:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/ingress-addon-legacy-065000/disk.qcow2
	I0910 13:57:53.114715    3008 main.go:141] libmachine: STDOUT: 
	I0910 13:57:53.114754    3008 main.go:141] libmachine: STDERR: 
	I0910 13:57:53.114758    3008 main.go:141] libmachine: Attempt 0
	I0910 13:57:53.114777    3008 main.go:141] libmachine: Searching for 5e:73:57:75:55:23 in /var/db/dhcpd_leases ...
	I0910 13:57:53.114845    3008 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0910 13:57:53.114867    3008 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:9a:ea:4c:ec:aa:c9 ID:1,9a:ea:4c:ec:aa:c9 Lease:0x64ff7f35}
	I0910 13:57:53.114874    3008 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:a:8b:1d:9e:ed:cd ID:1,a:8b:1d:9e:ed:cd Lease:0x64ff7e74}
	I0910 13:57:53.114879    3008 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:8a:44:d5:f7:c5:50 ID:1,8a:44:d5:f7:c5:50 Lease:0x64fe2ce8}
	I0910 13:57:53.114885    3008 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:46:0:11:b3:e0:52 ID:1,46:0:11:b3:e0:52 Lease:0x64ff7e27}
	I0910 13:57:55.117005    3008 main.go:141] libmachine: Attempt 1
	I0910 13:57:55.117110    3008 main.go:141] libmachine: Searching for 5e:73:57:75:55:23 in /var/db/dhcpd_leases ...
	I0910 13:57:55.117475    3008 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0910 13:57:55.117528    3008 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:9a:ea:4c:ec:aa:c9 ID:1,9a:ea:4c:ec:aa:c9 Lease:0x64ff7f35}
	I0910 13:57:55.117600    3008 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:a:8b:1d:9e:ed:cd ID:1,a:8b:1d:9e:ed:cd Lease:0x64ff7e74}
	I0910 13:57:55.117635    3008 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:8a:44:d5:f7:c5:50 ID:1,8a:44:d5:f7:c5:50 Lease:0x64fe2ce8}
	I0910 13:57:55.117665    3008 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:46:0:11:b3:e0:52 ID:1,46:0:11:b3:e0:52 Lease:0x64ff7e27}
	I0910 13:57:57.119803    3008 main.go:141] libmachine: Attempt 2
	I0910 13:57:57.119833    3008 main.go:141] libmachine: Searching for 5e:73:57:75:55:23 in /var/db/dhcpd_leases ...
	I0910 13:57:57.119944    3008 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0910 13:57:57.119958    3008 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:9a:ea:4c:ec:aa:c9 ID:1,9a:ea:4c:ec:aa:c9 Lease:0x64ff7f35}
	I0910 13:57:57.119974    3008 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:a:8b:1d:9e:ed:cd ID:1,a:8b:1d:9e:ed:cd Lease:0x64ff7e74}
	I0910 13:57:57.119979    3008 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:8a:44:d5:f7:c5:50 ID:1,8a:44:d5:f7:c5:50 Lease:0x64fe2ce8}
	I0910 13:57:57.119985    3008 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:46:0:11:b3:e0:52 ID:1,46:0:11:b3:e0:52 Lease:0x64ff7e27}
	I0910 13:57:59.122000    3008 main.go:141] libmachine: Attempt 3
	I0910 13:57:59.122012    3008 main.go:141] libmachine: Searching for 5e:73:57:75:55:23 in /var/db/dhcpd_leases ...
	I0910 13:57:59.122106    3008 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0910 13:57:59.122122    3008 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:9a:ea:4c:ec:aa:c9 ID:1,9a:ea:4c:ec:aa:c9 Lease:0x64ff7f35}
	I0910 13:57:59.122128    3008 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:a:8b:1d:9e:ed:cd ID:1,a:8b:1d:9e:ed:cd Lease:0x64ff7e74}
	I0910 13:57:59.122134    3008 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:8a:44:d5:f7:c5:50 ID:1,8a:44:d5:f7:c5:50 Lease:0x64fe2ce8}
	I0910 13:57:59.122140    3008 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:46:0:11:b3:e0:52 ID:1,46:0:11:b3:e0:52 Lease:0x64ff7e27}
	I0910 13:58:01.124185    3008 main.go:141] libmachine: Attempt 4
	I0910 13:58:01.124207    3008 main.go:141] libmachine: Searching for 5e:73:57:75:55:23 in /var/db/dhcpd_leases ...
	I0910 13:58:01.124304    3008 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0910 13:58:01.124318    3008 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:9a:ea:4c:ec:aa:c9 ID:1,9a:ea:4c:ec:aa:c9 Lease:0x64ff7f35}
	I0910 13:58:01.124324    3008 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:a:8b:1d:9e:ed:cd ID:1,a:8b:1d:9e:ed:cd Lease:0x64ff7e74}
	I0910 13:58:01.124342    3008 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:8a:44:d5:f7:c5:50 ID:1,8a:44:d5:f7:c5:50 Lease:0x64fe2ce8}
	I0910 13:58:01.124348    3008 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:46:0:11:b3:e0:52 ID:1,46:0:11:b3:e0:52 Lease:0x64ff7e27}
	I0910 13:58:03.126413    3008 main.go:141] libmachine: Attempt 5
	I0910 13:58:03.126428    3008 main.go:141] libmachine: Searching for 5e:73:57:75:55:23 in /var/db/dhcpd_leases ...
	I0910 13:58:03.126490    3008 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0910 13:58:03.126501    3008 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:9a:ea:4c:ec:aa:c9 ID:1,9a:ea:4c:ec:aa:c9 Lease:0x64ff7f35}
	I0910 13:58:03.126519    3008 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:a:8b:1d:9e:ed:cd ID:1,a:8b:1d:9e:ed:cd Lease:0x64ff7e74}
	I0910 13:58:03.126524    3008 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:8a:44:d5:f7:c5:50 ID:1,8a:44:d5:f7:c5:50 Lease:0x64fe2ce8}
	I0910 13:58:03.126529    3008 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:46:0:11:b3:e0:52 ID:1,46:0:11:b3:e0:52 Lease:0x64ff7e27}
	I0910 13:58:05.128589    3008 main.go:141] libmachine: Attempt 6
	I0910 13:58:05.128670    3008 main.go:141] libmachine: Searching for 5e:73:57:75:55:23 in /var/db/dhcpd_leases ...
	I0910 13:58:05.128820    3008 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0910 13:58:05.128838    3008 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:5e:73:57:75:55:23 ID:1,5e:73:57:75:55:23 Lease:0x64ff7f5c}
	I0910 13:58:05.128846    3008 main.go:141] libmachine: Found match: 5e:73:57:75:55:23
	I0910 13:58:05.128860    3008 main.go:141] libmachine: IP: 192.168.105.6
	I0910 13:58:05.128867    3008 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
	I0910 13:58:07.148588    3008 machine.go:88] provisioning docker machine ...
	I0910 13:58:07.148659    3008 buildroot.go:166] provisioning hostname "ingress-addon-legacy-065000"
	I0910 13:58:07.148861    3008 main.go:141] libmachine: Using SSH client type: native
	I0910 13:58:07.149844    3008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100fde3b0] 0x100fe0e10 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0910 13:58:07.149872    3008 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-065000 && echo "ingress-addon-legacy-065000" | sudo tee /etc/hostname
	I0910 13:58:07.255434    3008 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-065000
	
	I0910 13:58:07.255581    3008 main.go:141] libmachine: Using SSH client type: native
	I0910 13:58:07.256103    3008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100fde3b0] 0x100fe0e10 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0910 13:58:07.256121    3008 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-065000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-065000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-065000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0910 13:58:07.341647    3008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0910 13:58:07.341685    3008 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17207-1093/.minikube CaCertPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17207-1093/.minikube}
	I0910 13:58:07.341698    3008 buildroot.go:174] setting up certificates
	I0910 13:58:07.341729    3008 provision.go:83] configureAuth start
	I0910 13:58:07.341742    3008 provision.go:138] copyHostCerts
	I0910 13:58:07.341791    3008 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17207-1093/.minikube/ca.pem
	I0910 13:58:07.341870    3008 exec_runner.go:144] found /Users/jenkins/minikube-integration/17207-1093/.minikube/ca.pem, removing ...
	I0910 13:58:07.341878    3008 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17207-1093/.minikube/ca.pem
	I0910 13:58:07.342083    3008 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17207-1093/.minikube/ca.pem (1078 bytes)
	I0910 13:58:07.342310    3008 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17207-1093/.minikube/cert.pem
	I0910 13:58:07.342338    3008 exec_runner.go:144] found /Users/jenkins/minikube-integration/17207-1093/.minikube/cert.pem, removing ...
	I0910 13:58:07.342341    3008 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17207-1093/.minikube/cert.pem
	I0910 13:58:07.342410    3008 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17207-1093/.minikube/cert.pem (1123 bytes)
	I0910 13:58:07.342522    3008 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17207-1093/.minikube/key.pem
	I0910 13:58:07.342552    3008 exec_runner.go:144] found /Users/jenkins/minikube-integration/17207-1093/.minikube/key.pem, removing ...
	I0910 13:58:07.342558    3008 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17207-1093/.minikube/key.pem
	I0910 13:58:07.342657    3008 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17207-1093/.minikube/key.pem (1675 bytes)
	I0910 13:58:07.342773    3008 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-065000 san=[192.168.105.6 192.168.105.6 localhost 127.0.0.1 minikube ingress-addon-legacy-065000]
	I0910 13:58:07.606020    3008 provision.go:172] copyRemoteCerts
	I0910 13:58:07.606087    3008 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0910 13:58:07.606099    3008 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/ingress-addon-legacy-065000/id_rsa Username:docker}
	I0910 13:58:07.645489    3008 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0910 13:58:07.645536    3008 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0910 13:58:07.652919    3008 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0910 13:58:07.652972    3008 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0910 13:58:07.660488    3008 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0910 13:58:07.660532    3008 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0910 13:58:07.667362    3008 provision.go:86] duration metric: configureAuth took 325.625625ms
	I0910 13:58:07.667378    3008 buildroot.go:189] setting minikube options for container-runtime
	I0910 13:58:07.667493    3008 config.go:182] Loaded profile config "ingress-addon-legacy-065000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0910 13:58:07.667541    3008 main.go:141] libmachine: Using SSH client type: native
	I0910 13:58:07.667756    3008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100fde3b0] 0x100fe0e10 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0910 13:58:07.667761    3008 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0910 13:58:07.738411    3008 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0910 13:58:07.738421    3008 buildroot.go:70] root file system type: tmpfs
	I0910 13:58:07.738482    3008 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0910 13:58:07.738535    3008 main.go:141] libmachine: Using SSH client type: native
	I0910 13:58:07.738788    3008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100fde3b0] 0x100fe0e10 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0910 13:58:07.738826    3008 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0910 13:58:07.815462    3008 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0910 13:58:07.815529    3008 main.go:141] libmachine: Using SSH client type: native
	I0910 13:58:07.815790    3008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100fde3b0] 0x100fe0e10 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0910 13:58:07.815800    3008 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0910 13:58:08.187745    3008 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0910 13:58:08.187756    3008 machine.go:91] provisioned docker machine in 1.03915s
	I0910 13:58:08.187761    3008 client.go:171] LocalClient.Create took 15.420374084s
	I0910 13:58:08.187775    3008 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-065000" took 15.420416875s
	I0910 13:58:08.187782    3008 start.go:300] post-start starting for "ingress-addon-legacy-065000" (driver="qemu2")
	I0910 13:58:08.187786    3008 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0910 13:58:08.187853    3008 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0910 13:58:08.187862    3008 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/ingress-addon-legacy-065000/id_rsa Username:docker}
	I0910 13:58:08.226419    3008 ssh_runner.go:195] Run: cat /etc/os-release
	I0910 13:58:08.228009    3008 info.go:137] Remote host: Buildroot 2021.02.12
	I0910 13:58:08.228016    3008 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17207-1093/.minikube/addons for local assets ...
	I0910 13:58:08.228089    3008 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17207-1093/.minikube/files for local assets ...
	I0910 13:58:08.228187    3008 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17207-1093/.minikube/files/etc/ssl/certs/22002.pem -> 22002.pem in /etc/ssl/certs
	I0910 13:58:08.228194    3008 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17207-1093/.minikube/files/etc/ssl/certs/22002.pem -> /etc/ssl/certs/22002.pem
	I0910 13:58:08.228300    3008 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0910 13:58:08.230863    3008 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/files/etc/ssl/certs/22002.pem --> /etc/ssl/certs/22002.pem (1708 bytes)
	I0910 13:58:08.237742    3008 start.go:303] post-start completed in 49.957041ms
	I0910 13:58:08.238146    3008 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/config.json ...
	I0910 13:58:08.238308    3008 start.go:128] duration metric: createHost completed in 15.494566083s
	I0910 13:58:08.238330    3008 main.go:141] libmachine: Using SSH client type: native
	I0910 13:58:08.238550    3008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100fde3b0] 0x100fe0e10 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0910 13:58:08.238555    3008 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0910 13:58:08.312595    3008 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694379488.796074835
	
	I0910 13:58:08.312603    3008 fix.go:206] guest clock: 1694379488.796074835
	I0910 13:58:08.312607    3008 fix.go:219] Guest: 2023-09-10 13:58:08.796074835 -0700 PDT Remote: 2023-09-10 13:58:08.23831 -0700 PDT m=+22.510466668 (delta=557.764835ms)
	I0910 13:58:08.312618    3008 fix.go:190] guest clock delta is within tolerance: 557.764835ms
	I0910 13:58:08.312621    3008 start.go:83] releasing machines lock for "ingress-addon-legacy-065000", held for 15.568921625s
	I0910 13:58:08.312915    3008 ssh_runner.go:195] Run: cat /version.json
	I0910 13:58:08.312926    3008 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/ingress-addon-legacy-065000/id_rsa Username:docker}
	I0910 13:58:08.312931    3008 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0910 13:58:08.312952    3008 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/ingress-addon-legacy-065000/id_rsa Username:docker}
	I0910 13:58:08.352480    3008 ssh_runner.go:195] Run: systemctl --version
	I0910 13:58:08.393696    3008 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0910 13:58:08.395739    3008 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0910 13:58:08.395776    3008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0910 13:58:08.398905    3008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0910 13:58:08.404289    3008 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0910 13:58:08.404296    3008 start.go:466] detecting cgroup driver to use...
	I0910 13:58:08.404361    3008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 13:58:08.411502    3008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0910 13:58:08.414472    3008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0910 13:58:08.417545    3008 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0910 13:58:08.417565    3008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0910 13:58:08.420896    3008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 13:58:08.424158    3008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0910 13:58:08.426912    3008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0910 13:58:08.429891    3008 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0910 13:58:08.433138    3008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0910 13:58:08.436480    3008 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0910 13:58:08.438940    3008 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0910 13:58:08.441843    3008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 13:58:08.537135    3008 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0910 13:58:08.546322    3008 start.go:466] detecting cgroup driver to use...
	I0910 13:58:08.546389    3008 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0910 13:58:08.553848    3008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 13:58:08.562317    3008 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0910 13:58:08.569360    3008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0910 13:58:08.574736    3008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 13:58:08.580273    3008 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0910 13:58:08.617849    3008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0910 13:58:08.624104    3008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0910 13:58:08.629054    3008 ssh_runner.go:195] Run: which cri-dockerd
	I0910 13:58:08.630319    3008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0910 13:58:08.633243    3008 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0910 13:58:08.638310    3008 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0910 13:58:08.720065    3008 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0910 13:58:08.796328    3008 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0910 13:58:08.796342    3008 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0910 13:58:08.801854    3008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 13:58:08.891667    3008 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 13:58:10.061339    3008 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.169668583s)
	I0910 13:58:10.061424    3008 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 13:58:10.072927    3008 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0910 13:58:10.092077    3008 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.6 ...
	I0910 13:58:10.092214    3008 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0910 13:58:10.093561    3008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 13:58:10.097316    3008 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0910 13:58:10.097357    3008 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0910 13:58:10.102314    3008 docker.go:636] Got preloaded images: 
	I0910 13:58:10.102321    3008 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0910 13:58:10.102352    3008 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0910 13:58:10.105275    3008 ssh_runner.go:195] Run: which lz4
	I0910 13:58:10.106483    3008 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0910 13:58:10.106577    3008 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0910 13:58:10.107901    3008 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0910 13:58:10.107915    3008 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
	I0910 13:58:11.782602    3008 docker.go:600] Took 1.676088 seconds to copy over tarball
	I0910 13:58:11.782657    3008 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0910 13:58:13.093993    3008 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.311317458s)
	I0910 13:58:13.094007    3008 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0910 13:58:13.114206    3008 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0910 13:58:13.118089    3008 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0910 13:58:13.124605    3008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0910 13:58:13.204699    3008 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0910 13:58:14.741489    3008 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.536780583s)
	I0910 13:58:14.741573    3008 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0910 13:58:14.748214    3008 docker.go:636] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0910 13:58:14.748224    3008 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0910 13:58:14.748228    3008 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0910 13:58:14.759865    3008 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0910 13:58:14.759964    3008 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 13:58:14.760161    3008 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0910 13:58:14.760290    3008 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0910 13:58:14.760330    3008 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0910 13:58:14.760495    3008 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0910 13:58:14.760550    3008 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0910 13:58:14.760929    3008 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0910 13:58:14.769273    3008 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0910 13:58:14.769334    3008 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0910 13:58:14.769379    3008 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0910 13:58:14.769422    3008 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0910 13:58:14.769471    3008 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0910 13:58:14.769519    3008 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0910 13:58:14.770124    3008 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0910 13:58:14.770147    3008 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	W0910 13:58:15.404817    3008 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0910 13:58:15.404931    3008 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0910 13:58:15.411171    3008 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0910 13:58:15.411197    3008 docker.go:316] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0910 13:58:15.411235    3008 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0910 13:58:15.419710    3008 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	W0910 13:58:15.458829    3008 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0910 13:58:15.458963    3008 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0910 13:58:15.464844    3008 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0910 13:58:15.464863    3008 docker.go:316] Removing image: registry.k8s.io/coredns:1.6.7
	I0910 13:58:15.464906    3008 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0910 13:58:15.470146    3008 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	W0910 13:58:15.577830    3008 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0910 13:58:15.577991    3008 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0910 13:58:15.588089    3008 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0910 13:58:15.588115    3008 docker.go:316] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0910 13:58:15.588162    3008 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0910 13:58:15.594471    3008 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	W0910 13:58:15.779246    3008 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0910 13:58:15.779357    3008 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0910 13:58:15.785232    3008 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0910 13:58:15.785258    3008 docker.go:316] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0910 13:58:15.785297    3008 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0910 13:58:15.795531    3008 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0910 13:58:15.986319    3008 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0910 13:58:15.992129    3008 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0910 13:58:15.992154    3008 docker.go:316] Removing image: registry.k8s.io/pause:3.2
	I0910 13:58:15.992197    3008 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0910 13:58:15.997805    3008 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	W0910 13:58:16.231069    3008 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0910 13:58:16.231218    3008 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0910 13:58:16.237951    3008 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0910 13:58:16.238002    3008 docker.go:316] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0910 13:58:16.238044    3008 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0910 13:58:16.244183    3008 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	W0910 13:58:16.399104    3008 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0910 13:58:16.399219    3008 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0910 13:58:16.418714    3008 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0910 13:58:16.418738    3008 docker.go:316] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0910 13:58:16.418779    3008 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0910 13:58:16.435685    3008 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	W0910 13:58:17.103509    3008 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0910 13:58:17.104161    3008 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 13:58:17.128208    3008 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0910 13:58:17.128278    3008 docker.go:316] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 13:58:17.128433    3008 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 13:58:17.152697    3008 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0910 13:58:17.152794    3008 cache_images.go:92] LoadImages completed in 2.404582042s
	W0910 13:58:17.152859    3008 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20: no such file or directory
	I0910 13:58:17.152954    3008 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0910 13:58:17.166758    3008 cni.go:84] Creating CNI manager for ""
	I0910 13:58:17.166773    3008 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0910 13:58:17.166790    3008 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0910 13:58:17.166804    3008 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.6 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-065000 NodeName:ingress-addon-legacy-065000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0910 13:58:17.166921    3008 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-065000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0910 13:58:17.166996    3008 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-065000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-065000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0910 13:58:17.167080    3008 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0910 13:58:17.171745    3008 binaries.go:44] Found k8s binaries, skipping transfer
	I0910 13:58:17.171791    3008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0910 13:58:17.175238    3008 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (355 bytes)
	I0910 13:58:17.181627    3008 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0910 13:58:17.187392    3008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2127 bytes)
	I0910 13:58:17.192808    3008 ssh_runner.go:195] Run: grep 192.168.105.6	control-plane.minikube.internal$ /etc/hosts
	I0910 13:58:17.194018    3008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0910 13:58:17.197890    3008 certs.go:56] Setting up /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000 for IP: 192.168.105.6
	I0910 13:58:17.197900    3008 certs.go:190] acquiring lock for shared ca certs: {Name:mk28134b321cd562735798fd2fcb10a58019fa5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 13:58:17.198033    3008 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17207-1093/.minikube/ca.key
	I0910 13:58:17.198073    3008 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17207-1093/.minikube/proxy-client-ca.key
	I0910 13:58:17.198106    3008 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/client.key
	I0910 13:58:17.198114    3008 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/client.crt with IP's: []
	I0910 13:58:17.362439    3008 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/client.crt ...
	I0910 13:58:17.362446    3008 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/client.crt: {Name:mkd06d813a48ae803e9a27d8a3330da31b71ff0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 13:58:17.362721    3008 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/client.key ...
	I0910 13:58:17.362724    3008 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/client.key: {Name:mk41d0b348e4a755e86d960e00814c465e27b858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 13:58:17.362852    3008 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/apiserver.key.b354f644
	I0910 13:58:17.362861    3008 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/apiserver.crt.b354f644 with IP's: [192.168.105.6 10.96.0.1 127.0.0.1 10.0.0.1]
	I0910 13:58:17.535925    3008 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/apiserver.crt.b354f644 ...
	I0910 13:58:17.535929    3008 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/apiserver.crt.b354f644: {Name:mk9154012facde0d2924e49b6cabc7770477302d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 13:58:17.536104    3008 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/apiserver.key.b354f644 ...
	I0910 13:58:17.536109    3008 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/apiserver.key.b354f644: {Name:mk6c468edcf1d7e870bba8fce92b0448af5b9ab7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 13:58:17.536236    3008 certs.go:337] copying /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/apiserver.crt.b354f644 -> /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/apiserver.crt
	I0910 13:58:17.536425    3008 certs.go:341] copying /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/apiserver.key.b354f644 -> /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/apiserver.key
	I0910 13:58:17.536534    3008 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/proxy-client.key
	I0910 13:58:17.536546    3008 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/proxy-client.crt with IP's: []
	I0910 13:58:17.620569    3008 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/proxy-client.crt ...
	I0910 13:58:17.620573    3008 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/proxy-client.crt: {Name:mk712e2ed9b46f3def4349dcc5ce28906b123d35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 13:58:17.620730    3008 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/proxy-client.key ...
	I0910 13:58:17.620733    3008 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/proxy-client.key: {Name:mk9946805a5ad332551449841449bd705fc63f18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 13:58:17.620849    3008 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0910 13:58:17.620864    3008 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0910 13:58:17.620875    3008 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0910 13:58:17.620886    3008 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0910 13:58:17.620898    3008 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17207-1093/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0910 13:58:17.620909    3008 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17207-1093/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0910 13:58:17.620921    3008 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17207-1093/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0910 13:58:17.620932    3008 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17207-1093/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0910 13:58:17.621008    3008 certs.go:437] found cert: /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/2200.pem (1338 bytes)
	W0910 13:58:17.621040    3008 certs.go:433] ignoring /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/2200_empty.pem, impossibly tiny 0 bytes
	I0910 13:58:17.621048    3008 certs.go:437] found cert: /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca-key.pem (1675 bytes)
	I0910 13:58:17.621073    3008 certs.go:437] found cert: /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem (1078 bytes)
	I0910 13:58:17.621096    3008 certs.go:437] found cert: /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem (1123 bytes)
	I0910 13:58:17.621130    3008 certs.go:437] found cert: /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/Users/jenkins/minikube-integration/17207-1093/.minikube/certs/key.pem (1675 bytes)
	I0910 13:58:17.621182    3008 certs.go:437] found cert: /Users/jenkins/minikube-integration/17207-1093/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17207-1093/.minikube/files/etc/ssl/certs/22002.pem (1708 bytes)
	I0910 13:58:17.621205    3008 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17207-1093/.minikube/files/etc/ssl/certs/22002.pem -> /usr/share/ca-certificates/22002.pem
	I0910 13:58:17.621215    3008 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17207-1093/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0910 13:58:17.621224    3008 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/2200.pem -> /usr/share/ca-certificates/2200.pem
	I0910 13:58:17.621606    3008 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0910 13:58:17.629486    3008 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0910 13:58:17.636817    3008 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0910 13:58:17.643816    3008 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0910 13:58:17.650281    3008 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0910 13:58:17.657298    3008 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0910 13:58:17.664441    3008 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0910 13:58:17.671105    3008 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0910 13:58:17.677663    3008 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/files/etc/ssl/certs/22002.pem --> /usr/share/ca-certificates/22002.pem (1708 bytes)
	I0910 13:58:17.684844    3008 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0910 13:58:17.691790    3008 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/2200.pem --> /usr/share/ca-certificates/2200.pem (1338 bytes)
	I0910 13:58:17.698266    3008 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0910 13:58:17.703307    3008 ssh_runner.go:195] Run: openssl version
	I0910 13:58:17.705141    3008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22002.pem && ln -fs /usr/share/ca-certificates/22002.pem /etc/ssl/certs/22002.pem"
	I0910 13:58:17.708150    3008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22002.pem
	I0910 13:58:17.709628    3008 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 10 20:54 /usr/share/ca-certificates/22002.pem
	I0910 13:58:17.709648    3008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22002.pem
	I0910 13:58:17.711805    3008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22002.pem /etc/ssl/certs/3ec20f2e.0"
	I0910 13:58:17.714850    3008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0910 13:58:17.718005    3008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0910 13:58:17.719505    3008 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 10 20:53 /usr/share/ca-certificates/minikubeCA.pem
	I0910 13:58:17.719525    3008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0910 13:58:17.721286    3008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0910 13:58:17.724518    3008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2200.pem && ln -fs /usr/share/ca-certificates/2200.pem /etc/ssl/certs/2200.pem"
	I0910 13:58:17.727228    3008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2200.pem
	I0910 13:58:17.728568    3008 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 10 20:54 /usr/share/ca-certificates/2200.pem
	I0910 13:58:17.728586    3008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2200.pem
	I0910 13:58:17.730264    3008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2200.pem /etc/ssl/certs/51391683.0"
	I0910 13:58:17.733560    3008 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0910 13:58:17.734888    3008 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0910 13:58:17.734921    3008 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-065000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-065000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 13:58:17.734989    3008 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0910 13:58:17.740446    3008 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0910 13:58:17.743335    3008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0910 13:58:17.746010    3008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0910 13:58:17.749248    3008 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0910 13:58:17.749263    3008 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0910 13:58:17.775656    3008 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0910 13:58:17.775688    3008 kubeadm.go:322] [preflight] Running pre-flight checks
	I0910 13:58:17.866341    3008 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0910 13:58:17.866394    3008 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0910 13:58:17.866447    3008 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0910 13:58:17.914036    3008 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0910 13:58:17.915460    3008 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0910 13:58:17.915486    3008 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0910 13:58:18.010842    3008 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0910 13:58:18.017981    3008 out.go:204]   - Generating certificates and keys ...
	I0910 13:58:18.018015    3008 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0910 13:58:18.018047    3008 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0910 13:58:18.044008    3008 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0910 13:58:18.082353    3008 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0910 13:58:18.242982    3008 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0910 13:58:18.339961    3008 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0910 13:58:18.509351    3008 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0910 13:58:18.509415    3008 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-065000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0910 13:58:18.634777    3008 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0910 13:58:18.634843    3008 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-065000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0910 13:58:18.683196    3008 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0910 13:58:18.729538    3008 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0910 13:58:18.837204    3008 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0910 13:58:18.837232    3008 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0910 13:58:18.881064    3008 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0910 13:58:18.921906    3008 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0910 13:58:19.069449    3008 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0910 13:58:19.116925    3008 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0910 13:58:19.117245    3008 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0910 13:58:19.121476    3008 out.go:204]   - Booting up control plane ...
	I0910 13:58:19.121521    3008 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0910 13:58:19.121566    3008 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0910 13:58:19.128800    3008 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0910 13:58:19.128845    3008 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0910 13:58:19.128934    3008 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0910 13:58:30.636157    3008 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.506298 seconds
	I0910 13:58:30.636470    3008 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0910 13:58:30.658122    3008 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0910 13:58:31.172499    3008 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0910 13:58:31.172582    3008 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-065000 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0910 13:58:31.704452    3008 kubeadm.go:322] [bootstrap-token] Using token: jskx3u.z97hrobme8qm6tu3
	I0910 13:58:31.708635    3008 out.go:204]   - Configuring RBAC rules ...
	I0910 13:58:31.708889    3008 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0910 13:58:31.717020    3008 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0910 13:58:31.724312    3008 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0910 13:58:31.728500    3008 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0910 13:58:31.730340    3008 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0910 13:58:31.731790    3008 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0910 13:58:31.741716    3008 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0910 13:58:31.914407    3008 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0910 13:58:32.118997    3008 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0910 13:58:32.119567    3008 kubeadm.go:322] 
	I0910 13:58:32.119612    3008 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0910 13:58:32.119616    3008 kubeadm.go:322] 
	I0910 13:58:32.119678    3008 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0910 13:58:32.119683    3008 kubeadm.go:322] 
	I0910 13:58:32.119706    3008 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0910 13:58:32.119761    3008 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0910 13:58:32.119796    3008 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0910 13:58:32.119801    3008 kubeadm.go:322] 
	I0910 13:58:32.119836    3008 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0910 13:58:32.119918    3008 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0910 13:58:32.119961    3008 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0910 13:58:32.119964    3008 kubeadm.go:322] 
	I0910 13:58:32.120034    3008 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0910 13:58:32.120094    3008 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0910 13:58:32.120100    3008 kubeadm.go:322] 
	I0910 13:58:32.120153    3008 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token jskx3u.z97hrobme8qm6tu3 \
	I0910 13:58:32.120233    3008 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:10bd6c29805637182224d42f91d4bace622161cd91f1a9b0f464f3aed87a5ead \
	I0910 13:58:32.120251    3008 kubeadm.go:322]     --control-plane 
	I0910 13:58:32.120263    3008 kubeadm.go:322] 
	I0910 13:58:32.120372    3008 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0910 13:58:32.120428    3008 kubeadm.go:322] 
	I0910 13:58:32.120523    3008 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token jskx3u.z97hrobme8qm6tu3 \
	I0910 13:58:32.120599    3008 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:10bd6c29805637182224d42f91d4bace622161cd91f1a9b0f464f3aed87a5ead 
	I0910 13:58:32.120791    3008 kubeadm.go:322] W0910 20:58:18.259162    1401 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0910 13:58:32.120911    3008 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0910 13:58:32.120995    3008 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
	I0910 13:58:32.121086    3008 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0910 13:58:32.121215    3008 kubeadm.go:322] W0910 20:58:19.604302    1401 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0910 13:58:32.121304    3008 kubeadm.go:322] W0910 20:58:19.609408    1401 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0910 13:58:32.121312    3008 cni.go:84] Creating CNI manager for ""
	I0910 13:58:32.121320    3008 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0910 13:58:32.121334    3008 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0910 13:58:32.121422    3008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:58:32.121424    3008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d731e1cec1979d094cdaebcdf1ed599ff8209767 minikube.k8s.io/name=ingress-addon-legacy-065000 minikube.k8s.io/updated_at=2023_09_10T13_58_32_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:58:32.196843    3008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:58:32.207872    3008 ops.go:34] apiserver oom_adj: -16
	I0910 13:58:32.237645    3008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:58:32.774825    3008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:58:33.275010    3008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:58:33.774844    3008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:58:34.274902    3008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:58:34.774806    3008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:58:35.274861    3008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:58:35.774817    3008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:58:36.274891    3008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:58:36.774898    3008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:58:37.274575    3008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:58:37.774825    3008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:58:38.274973    3008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:58:38.774599    3008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:58:39.274828    3008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:58:39.774832    3008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:58:40.274505    3008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:58:40.774502    3008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:58:41.274799    3008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:58:41.774741    3008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:58:42.274761    3008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:58:42.774788    3008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:58:43.274768    3008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:58:43.774757    3008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:58:44.274754    3008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:58:44.774653    3008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:58:45.274514    3008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:58:45.774676    3008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:58:46.273174    3008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:58:46.774746    3008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:58:47.274507    3008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0910 13:58:47.319918    3008 kubeadm.go:1081] duration metric: took 15.198730375s to wait for elevateKubeSystemPrivileges.
	I0910 13:58:47.319931    3008 kubeadm.go:406] StartCluster complete in 29.585327958s
	I0910 13:58:47.319940    3008 settings.go:142] acquiring lock: {Name:mk5069f344fe5f68592bc6867db9aede10bc3fe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 13:58:47.320101    3008 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 13:58:47.320456    3008 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/kubeconfig: {Name:mk7c70008fc2d1b0ba569659f9157708891e79a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 13:58:47.320669    3008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0910 13:58:47.320732    3008 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0910 13:58:47.320771    3008 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-065000"
	I0910 13:58:47.320779    3008 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-065000"
	I0910 13:58:47.320780    3008 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-065000"
	I0910 13:58:47.320793    3008 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-065000"
	I0910 13:58:47.320805    3008 host.go:66] Checking if "ingress-addon-legacy-065000" exists ...
	I0910 13:58:47.321173    3008 kapi.go:59] client config for ingress-addon-legacy-065000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/client.key", CAFile:"/Users/jenkins/minikube-integration/17207-1093/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102399d70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0910 13:58:47.321355    3008 config.go:182] Loaded profile config "ingress-addon-legacy-065000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0910 13:58:47.321556    3008 cert_rotation.go:137] Starting client certificate rotation controller
	I0910 13:58:47.322344    3008 kapi.go:59] client config for ingress-addon-legacy-065000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/client.key", CAFile:"/Users/jenkins/minikube-integration/17207-1093/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102399d70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0910 13:58:47.327740    3008 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 13:58:47.331686    3008 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 13:58:47.331692    3008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0910 13:58:47.331699    3008 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/ingress-addon-legacy-065000/id_rsa Username:docker}
	I0910 13:58:47.335930    3008 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-065000"
	I0910 13:58:47.335947    3008 host.go:66] Checking if "ingress-addon-legacy-065000" exists ...
	I0910 13:58:47.336621    3008 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0910 13:58:47.336627    3008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0910 13:58:47.336635    3008 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/ingress-addon-legacy-065000/id_rsa Username:docker}
	I0910 13:58:47.339522    3008 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-065000" context rescaled to 1 replicas
	I0910 13:58:47.339540    3008 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 13:58:47.342727    3008 out.go:177] * Verifying Kubernetes components...
	I0910 13:58:47.352663    3008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 13:58:47.384462    3008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0910 13:58:47.384778    3008 kapi.go:59] client config for ingress-addon-legacy-065000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/client.key", CAFile:"/Users/jenkins/minikube-integration/17207-1093/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102399d70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0910 13:58:47.384912    3008 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-065000" to be "Ready" ...
	I0910 13:58:47.386546    3008 node_ready.go:49] node "ingress-addon-legacy-065000" has status "Ready":"True"
	I0910 13:58:47.386552    3008 node_ready.go:38] duration metric: took 1.633833ms waiting for node "ingress-addon-legacy-065000" to be "Ready" ...
	I0910 13:58:47.386556    3008 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 13:58:47.389347    3008 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-h98xs" in "kube-system" namespace to be "Ready" ...
	I0910 13:58:47.390616    3008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0910 13:58:47.401134    3008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0910 13:58:47.637737    3008 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0910 13:58:47.656117    3008 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0910 13:58:47.662958    3008 addons.go:502] enable addons completed in 342.24175ms: enabled=[default-storageclass storage-provisioner]
	I0910 13:58:49.396046    3008 pod_ready.go:102] pod "coredns-66bff467f8-h98xs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-09-10 13:58:47 -0700 PDT Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0910 13:58:51.405655    3008 pod_ready.go:102] pod "coredns-66bff467f8-h98xs" in "kube-system" namespace has status "Ready":"False"
	I0910 13:58:53.405934    3008 pod_ready.go:102] pod "coredns-66bff467f8-h98xs" in "kube-system" namespace has status "Ready":"False"
	I0910 13:58:55.406652    3008 pod_ready.go:102] pod "coredns-66bff467f8-h98xs" in "kube-system" namespace has status "Ready":"False"
	I0910 13:58:57.906867    3008 pod_ready.go:102] pod "coredns-66bff467f8-h98xs" in "kube-system" namespace has status "Ready":"False"
	I0910 13:59:00.398012    3008 pod_ready.go:92] pod "coredns-66bff467f8-h98xs" in "kube-system" namespace has status "Ready":"True"
	I0910 13:59:00.398025    3008 pod_ready.go:81] duration metric: took 13.008807083s waiting for pod "coredns-66bff467f8-h98xs" in "kube-system" namespace to be "Ready" ...
	I0910 13:59:00.398034    3008 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-065000" in "kube-system" namespace to be "Ready" ...
	I0910 13:59:00.401069    3008 pod_ready.go:92] pod "etcd-ingress-addon-legacy-065000" in "kube-system" namespace has status "Ready":"True"
	I0910 13:59:00.401078    3008 pod_ready.go:81] duration metric: took 3.040125ms waiting for pod "etcd-ingress-addon-legacy-065000" in "kube-system" namespace to be "Ready" ...
	I0910 13:59:00.401084    3008 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-065000" in "kube-system" namespace to be "Ready" ...
	I0910 13:59:00.404363    3008 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-065000" in "kube-system" namespace has status "Ready":"True"
	I0910 13:59:00.404368    3008 pod_ready.go:81] duration metric: took 3.279541ms waiting for pod "kube-apiserver-ingress-addon-legacy-065000" in "kube-system" namespace to be "Ready" ...
	I0910 13:59:00.404372    3008 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-065000" in "kube-system" namespace to be "Ready" ...
	I0910 13:59:00.407248    3008 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-065000" in "kube-system" namespace has status "Ready":"True"
	I0910 13:59:00.407256    3008 pod_ready.go:81] duration metric: took 2.880125ms waiting for pod "kube-controller-manager-ingress-addon-legacy-065000" in "kube-system" namespace to be "Ready" ...
	I0910 13:59:00.407264    3008 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-065000" in "kube-system" namespace to be "Ready" ...
	I0910 13:59:00.409600    3008 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-065000" in "kube-system" namespace has status "Ready":"True"
	I0910 13:59:00.409605    3008 pod_ready.go:81] duration metric: took 2.333125ms waiting for pod "kube-scheduler-ingress-addon-legacy-065000" in "kube-system" namespace to be "Ready" ...
	I0910 13:59:00.409608    3008 pod_ready.go:38] duration metric: took 13.023186208s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0910 13:59:00.409618    3008 api_server.go:52] waiting for apiserver process to appear ...
	I0910 13:59:00.409685    3008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0910 13:59:00.414968    3008 api_server.go:72] duration metric: took 13.075550958s to wait for apiserver process to appear ...
	I0910 13:59:00.414975    3008 api_server.go:88] waiting for apiserver healthz status ...
	I0910 13:59:00.414982    3008 api_server.go:253] Checking apiserver healthz at https://192.168.105.6:8443/healthz ...
	I0910 13:59:00.418812    3008 api_server.go:279] https://192.168.105.6:8443/healthz returned 200:
	ok
	I0910 13:59:00.419285    3008 api_server.go:141] control plane version: v1.18.20
	I0910 13:59:00.419296    3008 api_server.go:131] duration metric: took 4.318458ms to wait for apiserver health ...
	I0910 13:59:00.419299    3008 system_pods.go:43] waiting for kube-system pods to appear ...
	I0910 13:59:00.595818    3008 request.go:629] Waited for 176.4505ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0910 13:59:00.605352    3008 system_pods.go:59] 7 kube-system pods found
	I0910 13:59:00.605381    3008 system_pods.go:61] "coredns-66bff467f8-h98xs" [962614a1-9b8f-4c69-ad70-dd5d11426758] Running
	I0910 13:59:00.605391    3008 system_pods.go:61] "etcd-ingress-addon-legacy-065000" [17728df7-d0c3-47af-b1a9-d630137ea8b1] Running
	I0910 13:59:00.605400    3008 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-065000" [d50e5293-bff6-425e-bb1d-45e067570bae] Running
	I0910 13:59:00.605413    3008 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-065000" [ca3fd1a1-f2a7-480d-8b8d-c543d02bb528] Running
	I0910 13:59:00.605448    3008 system_pods.go:61] "kube-proxy-n7jwv" [85f5334a-401e-4495-b134-4fb71245e39d] Running
	I0910 13:59:00.605463    3008 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-065000" [761b9741-a4cb-4ec3-bf39-ad9cd672807c] Running
	I0910 13:59:00.605471    3008 system_pods.go:61] "storage-provisioner" [bf746941-5c94-4b46-a93f-5e6cb6b60b88] Running
	I0910 13:59:00.605482    3008 system_pods.go:74] duration metric: took 186.18ms to wait for pod list to return data ...
	I0910 13:59:00.605491    3008 default_sa.go:34] waiting for default service account to be created ...
	I0910 13:59:00.795812    3008 request.go:629] Waited for 190.231583ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/default/serviceaccounts
	I0910 13:59:00.802160    3008 default_sa.go:45] found service account: "default"
	I0910 13:59:00.802183    3008 default_sa.go:55] duration metric: took 196.685792ms for default service account to be created ...
	I0910 13:59:00.802200    3008 system_pods.go:116] waiting for k8s-apps to be running ...
	I0910 13:59:00.995832    3008 request.go:629] Waited for 193.515834ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0910 13:59:01.008148    3008 system_pods.go:86] 7 kube-system pods found
	I0910 13:59:01.008184    3008 system_pods.go:89] "coredns-66bff467f8-h98xs" [962614a1-9b8f-4c69-ad70-dd5d11426758] Running
	I0910 13:59:01.008198    3008 system_pods.go:89] "etcd-ingress-addon-legacy-065000" [17728df7-d0c3-47af-b1a9-d630137ea8b1] Running
	I0910 13:59:01.008207    3008 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-065000" [d50e5293-bff6-425e-bb1d-45e067570bae] Running
	I0910 13:59:01.008220    3008 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-065000" [ca3fd1a1-f2a7-480d-8b8d-c543d02bb528] Running
	I0910 13:59:01.008234    3008 system_pods.go:89] "kube-proxy-n7jwv" [85f5334a-401e-4495-b134-4fb71245e39d] Running
	I0910 13:59:01.008244    3008 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-065000" [761b9741-a4cb-4ec3-bf39-ad9cd672807c] Running
	I0910 13:59:01.008256    3008 system_pods.go:89] "storage-provisioner" [bf746941-5c94-4b46-a93f-5e6cb6b60b88] Running
	I0910 13:59:01.008271    3008 system_pods.go:126] duration metric: took 206.064584ms to wait for k8s-apps to be running ...
	I0910 13:59:01.008285    3008 system_svc.go:44] waiting for kubelet service to be running ....
	I0910 13:59:01.008492    3008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0910 13:59:01.025932    3008 system_svc.go:56] duration metric: took 17.645666ms WaitForService to wait for kubelet.
	I0910 13:59:01.025952    3008 kubeadm.go:581] duration metric: took 13.686539583s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0910 13:59:01.025974    3008 node_conditions.go:102] verifying NodePressure condition ...
	I0910 13:59:01.195843    3008 request.go:629] Waited for 169.795584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes
	I0910 13:59:01.203644    3008 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0910 13:59:01.203695    3008 node_conditions.go:123] node cpu capacity is 2
	I0910 13:59:01.203724    3008 node_conditions.go:105] duration metric: took 177.742375ms to run NodePressure ...
	I0910 13:59:01.203748    3008 start.go:228] waiting for startup goroutines ...
	I0910 13:59:01.203771    3008 start.go:233] waiting for cluster config update ...
	I0910 13:59:01.203806    3008 start.go:242] writing updated cluster config ...
	I0910 13:59:01.205148    3008 ssh_runner.go:195] Run: rm -f paused
	I0910 13:59:01.269180    3008 start.go:600] kubectl: 1.27.2, cluster: 1.18.20 (minor skew: 9)
	I0910 13:59:01.272438    3008 out.go:177] 
	W0910 13:59:01.276388    3008 out.go:239] ! /usr/local/bin/kubectl is version 1.27.2, which may have incompatibilities with Kubernetes 1.18.20.
	I0910 13:59:01.280334    3008 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0910 13:59:01.288407    3008 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-065000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Sun 2023-09-10 20:58:04 UTC, ends at Sun 2023-09-10 21:00:12 UTC. --
	Sep 10 20:59:44 ingress-addon-legacy-065000 dockerd[997]: time="2023-09-10T20:59:44.945279292Z" level=info msg="shim disconnected" id=424681855b80a7352761aedeb636238e06928fcad140856117e433c243583518 namespace=moby
	Sep 10 20:59:44 ingress-addon-legacy-065000 dockerd[997]: time="2023-09-10T20:59:44.945331670Z" level=warning msg="cleaning up after shim disconnected" id=424681855b80a7352761aedeb636238e06928fcad140856117e433c243583518 namespace=moby
	Sep 10 20:59:44 ingress-addon-legacy-065000 dockerd[997]: time="2023-09-10T20:59:44.945336004Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 20:59:58 ingress-addon-legacy-065000 dockerd[991]: time="2023-09-10T20:59:58.946900513Z" level=info msg="ignoring event" container=48014f523912ca422fe5cdd25e50b968dc88078a11c92a62d10b9230d157a233 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 20:59:58 ingress-addon-legacy-065000 dockerd[997]: time="2023-09-10T20:59:58.948476754Z" level=info msg="shim disconnected" id=48014f523912ca422fe5cdd25e50b968dc88078a11c92a62d10b9230d157a233 namespace=moby
	Sep 10 20:59:58 ingress-addon-legacy-065000 dockerd[997]: time="2023-09-10T20:59:58.948829062Z" level=warning msg="cleaning up after shim disconnected" id=48014f523912ca422fe5cdd25e50b968dc88078a11c92a62d10b9230d157a233 namespace=moby
	Sep 10 20:59:58 ingress-addon-legacy-065000 dockerd[997]: time="2023-09-10T20:59:58.948840688Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 20:59:58 ingress-addon-legacy-065000 dockerd[997]: time="2023-09-10T20:59:58.976086130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 10 20:59:58 ingress-addon-legacy-065000 dockerd[997]: time="2023-09-10T20:59:58.976121548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 20:59:58 ingress-addon-legacy-065000 dockerd[997]: time="2023-09-10T20:59:58.976130132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 10 20:59:58 ingress-addon-legacy-065000 dockerd[997]: time="2023-09-10T20:59:58.976136466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 10 20:59:59 ingress-addon-legacy-065000 dockerd[991]: time="2023-09-10T20:59:59.013661675Z" level=info msg="ignoring event" container=023c841f41965a20e8b697d6b72c637683a6a3005e37703bc6687a5164c8569d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 20:59:59 ingress-addon-legacy-065000 dockerd[997]: time="2023-09-10T20:59:59.013891853Z" level=info msg="shim disconnected" id=023c841f41965a20e8b697d6b72c637683a6a3005e37703bc6687a5164c8569d namespace=moby
	Sep 10 20:59:59 ingress-addon-legacy-065000 dockerd[997]: time="2023-09-10T20:59:59.013921896Z" level=warning msg="cleaning up after shim disconnected" id=023c841f41965a20e8b697d6b72c637683a6a3005e37703bc6687a5164c8569d namespace=moby
	Sep 10 20:59:59 ingress-addon-legacy-065000 dockerd[997]: time="2023-09-10T20:59:59.013926312Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 21:00:07 ingress-addon-legacy-065000 dockerd[991]: time="2023-09-10T21:00:07.400402929Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=acfa707b95508dfb4037495bc83f442c18c0f80ba982283efc9bb806196ce285
	Sep 10 21:00:07 ingress-addon-legacy-065000 dockerd[991]: time="2023-09-10T21:00:07.409693266Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=acfa707b95508dfb4037495bc83f442c18c0f80ba982283efc9bb806196ce285
	Sep 10 21:00:07 ingress-addon-legacy-065000 dockerd[991]: time="2023-09-10T21:00:07.524411615Z" level=info msg="ignoring event" container=acfa707b95508dfb4037495bc83f442c18c0f80ba982283efc9bb806196ce285 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 21:00:07 ingress-addon-legacy-065000 dockerd[997]: time="2023-09-10T21:00:07.524756046Z" level=info msg="shim disconnected" id=acfa707b95508dfb4037495bc83f442c18c0f80ba982283efc9bb806196ce285 namespace=moby
	Sep 10 21:00:07 ingress-addon-legacy-065000 dockerd[997]: time="2023-09-10T21:00:07.524816215Z" level=warning msg="cleaning up after shim disconnected" id=acfa707b95508dfb4037495bc83f442c18c0f80ba982283efc9bb806196ce285 namespace=moby
	Sep 10 21:00:07 ingress-addon-legacy-065000 dockerd[997]: time="2023-09-10T21:00:07.524826466Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 10 21:00:07 ingress-addon-legacy-065000 dockerd[991]: time="2023-09-10T21:00:07.556686933Z" level=info msg="ignoring event" container=b4431f7c2c26cb9b00f35fc59b5f7e896c13c9ba03a7cd93072fd93e02f4b04e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 10 21:00:07 ingress-addon-legacy-065000 dockerd[997]: time="2023-09-10T21:00:07.557009363Z" level=info msg="shim disconnected" id=b4431f7c2c26cb9b00f35fc59b5f7e896c13c9ba03a7cd93072fd93e02f4b04e namespace=moby
	Sep 10 21:00:07 ingress-addon-legacy-065000 dockerd[997]: time="2023-09-10T21:00:07.557045614Z" level=warning msg="cleaning up after shim disconnected" id=b4431f7c2c26cb9b00f35fc59b5f7e896c13c9ba03a7cd93072fd93e02f4b04e namespace=moby
	Sep 10 21:00:07 ingress-addon-legacy-065000 dockerd[997]: time="2023-09-10T21:00:07.557051239Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	023c841f41965       a39a074194753                                                                                                      14 seconds ago       Exited              hello-world-app           2                   67eeeb20d4360
	c8cb2d410c576       nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                                      37 seconds ago       Running             nginx                     0                   0ed063714c358
	acfa707b95508       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   59 seconds ago       Exited              controller                0                   b4431f7c2c26c
	dc8160f794ae4       a883f7fc35610                                                                                                      About a minute ago   Exited              patch                     1                   5d8136a1f0cf8
	750e0133da180       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              create                    0                   bcc5467b84ea6
	99a74643d1f0f       6e17ba78cf3eb                                                                                                      About a minute ago   Running             coredns                   0                   99192dd91788e
	c67b2913f9c7c       gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944    About a minute ago   Running             storage-provisioner       0                   10c53a759b39f
	809ae83e52cea       565297bc6f7d4                                                                                                      About a minute ago   Running             kube-proxy                0                   f97a3c5cf0158
	3feaee3bf22cd       2694cf044d665                                                                                                      About a minute ago   Running             kube-apiserver            0                   f867063e23a3d
	b967fb497cc6b       68a4fac29a865                                                                                                      About a minute ago   Running             kube-controller-manager   0                   bd85ca008f8ef
	2c6e4c0ff4311       095f37015706d                                                                                                      About a minute ago   Running             kube-scheduler            0                   5abbed5ddeba2
	9a38a5b2cef90       ab707b0a0ea33                                                                                                      About a minute ago   Running             etcd                      0                   337657826fcc9
	
	* 
	* ==> coredns [99a74643d1f0] <==
	* [INFO] 172.17.0.1:26201 - 63622 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000015334s
	[INFO] 172.17.0.1:26201 - 46471 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000029127s
	[INFO] 172.17.0.1:35709 - 31720 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000030585s
	[INFO] 172.17.0.1:26201 - 26238 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000035086s
	[INFO] 172.17.0.1:35709 - 39398 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000008875s
	[INFO] 172.17.0.1:26201 - 59606 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000036044s
	[INFO] 172.17.0.1:35709 - 39989 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00000925s
	[INFO] 172.17.0.1:26201 - 9885 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000023627s
	[INFO] 172.17.0.1:35709 - 65518 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000017668s
	[INFO] 172.17.0.1:26201 - 31201 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000034169s
	[INFO] 172.17.0.1:35709 - 62035 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000011251s
	[INFO] 172.17.0.1:62541 - 35462 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000017501s
	[INFO] 172.17.0.1:62541 - 57810 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000016209s
	[INFO] 172.17.0.1:62541 - 61689 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000012834s
	[INFO] 172.17.0.1:62541 - 6404 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000011959s
	[INFO] 172.17.0.1:62541 - 50780 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000011417s
	[INFO] 172.17.0.1:62541 - 30235 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000011126s
	[INFO] 172.17.0.1:62541 - 27361 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000011959s
	[INFO] 172.17.0.1:29228 - 25410 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000019584s
	[INFO] 172.17.0.1:29228 - 64028 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000010292s
	[INFO] 172.17.0.1:29228 - 12166 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000015834s
	[INFO] 172.17.0.1:29228 - 22195 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000012834s
	[INFO] 172.17.0.1:29228 - 56252 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00002596s
	[INFO] 172.17.0.1:29228 - 28493 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000016168s
	[INFO] 172.17.0.1:29228 - 3732 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000032794s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-065000
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-065000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d731e1cec1979d094cdaebcdf1ed599ff8209767
	                    minikube.k8s.io/name=ingress-addon-legacy-065000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_10T13_58_32_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 10 Sep 2023 20:58:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-065000
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 10 Sep 2023 21:00:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 10 Sep 2023 21:00:09 +0000   Sun, 10 Sep 2023 20:58:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 10 Sep 2023 21:00:09 +0000   Sun, 10 Sep 2023 20:58:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 10 Sep 2023 21:00:09 +0000   Sun, 10 Sep 2023 20:58:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 10 Sep 2023 21:00:09 +0000   Sun, 10 Sep 2023 20:58:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.6
	  Hostname:    ingress-addon-legacy-065000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	System Info:
	  Machine ID:                 5b964eefc67d4e6484f886c2b2acac99
	  System UUID:                5b964eefc67d4e6484f886c2b2acac99
	  Boot ID:                    912e81d3-8f93-46db-8063-bf4edc59eb2b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-v5sgl                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 coredns-66bff467f8-h98xs                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     85s
	  kube-system                 etcd-ingress-addon-legacy-065000                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-apiserver-ingress-addon-legacy-065000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-065000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-proxy-n7jwv                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-scheduler-ingress-addon-legacy-065000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 94s   kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  94s   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  94s   kubelet     Node ingress-addon-legacy-065000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    94s   kubelet     Node ingress-addon-legacy-065000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     94s   kubelet     Node ingress-addon-legacy-065000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                94s   kubelet     Node ingress-addon-legacy-065000 status is now: NodeReady
	  Normal  Starting                 84s   kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Sep10 20:58] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.662307] EINJ: EINJ table not found.
	[  +0.519086] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043694] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000804] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.377198] systemd-fstab-generator[479]: Ignoring "noauto" for root device
	[  +0.074237] systemd-fstab-generator[490]: Ignoring "noauto" for root device
	[  +0.498008] systemd-fstab-generator[715]: Ignoring "noauto" for root device
	[  +0.182543] systemd-fstab-generator[755]: Ignoring "noauto" for root device
	[  +0.076927] systemd-fstab-generator[766]: Ignoring "noauto" for root device
	[  +0.094964] systemd-fstab-generator[779]: Ignoring "noauto" for root device
	[  +4.313357] systemd-fstab-generator[984]: Ignoring "noauto" for root device
	[  +1.515133] kauditd_printk_skb: 53 callbacks suppressed
	[  +3.282654] systemd-fstab-generator[1523]: Ignoring "noauto" for root device
	[  +7.361103] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.082635] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +6.380616] systemd-fstab-generator[2593]: Ignoring "noauto" for root device
	[ +16.324341] kauditd_printk_skb: 7 callbacks suppressed
	[Sep10 20:59] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.060431] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[ +35.891774] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [9a38a5b2cef9] <==
	* raft2023/09/10 20:58:27 INFO: ed054832bd1917e1 became follower at term 0
	raft2023/09/10 20:58:27 INFO: newRaft ed054832bd1917e1 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/09/10 20:58:27 INFO: ed054832bd1917e1 became follower at term 1
	raft2023/09/10 20:58:27 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-09-10 20:58:27.043307 W | auth: simple token is not cryptographically signed
	2023-09-10 20:58:27.044055 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-09-10 20:58:27.044723 I | etcdserver: ed054832bd1917e1 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/09/10 20:58:27 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-09-10 20:58:27.045998 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-09-10 20:58:27.046072 I | embed: listening for peers on 192.168.105.6:2380
	2023-09-10 20:58:27.046111 I | etcdserver/membership: added member ed054832bd1917e1 [https://192.168.105.6:2380] to cluster 45a39c2c59b0edf4
	2023-09-10 20:58:27.046173 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/09/10 20:58:27 INFO: ed054832bd1917e1 is starting a new election at term 1
	raft2023/09/10 20:58:27 INFO: ed054832bd1917e1 became candidate at term 2
	raft2023/09/10 20:58:27 INFO: ed054832bd1917e1 received MsgVoteResp from ed054832bd1917e1 at term 2
	raft2023/09/10 20:58:27 INFO: ed054832bd1917e1 became leader at term 2
	raft2023/09/10 20:58:27 INFO: raft.node: ed054832bd1917e1 elected leader ed054832bd1917e1 at term 2
	2023-09-10 20:58:27.335297 I | etcdserver: setting up the initial cluster version to 3.4
	2023-09-10 20:58:27.336031 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-09-10 20:58:27.336088 I | etcdserver/api: enabled capabilities for version 3.4
	2023-09-10 20:58:27.336122 I | etcdserver: published {Name:ingress-addon-legacy-065000 ClientURLs:[https://192.168.105.6:2379]} to cluster 45a39c2c59b0edf4
	2023-09-10 20:58:27.336165 I | embed: ready to serve client requests
	2023-09-10 20:58:27.336211 I | embed: ready to serve client requests
	2023-09-10 20:58:27.336923 I | embed: serving client requests on 127.0.0.1:2379
	2023-09-10 20:58:27.341113 I | embed: serving client requests on 192.168.105.6:2379
	
	* 
	* ==> kernel <==
	*  21:00:12 up 2 min,  0 users,  load average: 0.48, 0.17, 0.06
	Linux ingress-addon-legacy-065000 5.10.57 #1 SMP PREEMPT Thu Sep 7 12:06:54 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [3feaee3bf22c] <==
	* E0910 20:58:29.515071       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.105.6, ResourceVersion: 0, AdditionalErrorMsg: 
	I0910 20:58:29.585352       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0910 20:58:29.585399       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0910 20:58:29.585408       1 cache.go:39] Caches are synced for autoregister controller
	I0910 20:58:29.585420       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0910 20:58:29.586053       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0910 20:58:30.483991       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0910 20:58:30.484275       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0910 20:58:30.496899       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0910 20:58:30.503465       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0910 20:58:30.503496       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0910 20:58:30.640519       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0910 20:58:30.652948       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0910 20:58:30.757575       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.105.6]
	I0910 20:58:30.758187       1 controller.go:609] quota admission added evaluator for: endpoints
	I0910 20:58:30.759961       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0910 20:58:31.797849       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0910 20:58:32.385461       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0910 20:58:32.565395       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0910 20:58:38.808069       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0910 20:58:47.305156       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0910 20:58:47.799070       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0910 20:59:01.692578       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0910 20:59:32.086062       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E0910 21:00:05.400480       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	* 
	* ==> kube-controller-manager [b967fb497cc6] <==
	* W0910 20:58:47.662951       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-065000. Assuming now as a timestamp.
	I0910 20:58:47.662965       1 node_lifecycle_controller.go:1249] Controller detected that zone  is now in state Normal.
	I0910 20:58:47.663130       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0910 20:58:47.663269       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-065000", UID:"05ef222e-d49c-4a22-8270-cb53e4c29190", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-065000 event: Registered Node ingress-addon-legacy-065000 in Controller
	I0910 20:58:47.757335       1 shared_informer.go:230] Caches are synced for stateful set 
	I0910 20:58:47.774080       1 shared_informer.go:230] Caches are synced for resource quota 
	I0910 20:58:47.796773       1 shared_informer.go:230] Caches are synced for daemon sets 
	I0910 20:58:47.803000       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"3b6b71e5-0cca-46d9-8c11-c13c0f7509d3", APIVersion:"apps/v1", ResourceVersion:"214", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-n7jwv
	I0910 20:58:47.808890       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0910 20:58:47.808899       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0910 20:58:47.849770       1 shared_informer.go:230] Caches are synced for disruption 
	I0910 20:58:47.849779       1 disruption.go:339] Sending events to api server.
	I0910 20:58:47.849810       1 shared_informer.go:230] Caches are synced for resource quota 
	I0910 20:58:47.855640       1 shared_informer.go:230] Caches are synced for ReplicationController 
	I0910 20:58:48.050089       1 request.go:621] Throttling request took 1.035891449s, request: GET:https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1beta1?timeout=32s
	I0910 20:58:48.504258       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
	I0910 20:58:48.504280       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0910 20:59:01.687553       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"d16430e4-6f94-4d5e-ac21-b7439a688bb8", APIVersion:"apps/v1", ResourceVersion:"438", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0910 20:59:01.695997       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"e11c178c-bc43-484d-ae70-725fff4a39e9", APIVersion:"apps/v1", ResourceVersion:"439", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-nq4z2
	I0910 20:59:01.709759       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"d25be5d2-af62-4234-8d46-58ba7f07b6b3", APIVersion:"batch/v1", ResourceVersion:"442", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-kndv6
	I0910 20:59:01.736349       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"7e3c44b3-cd57-4cad-befb-f2febd8b50fc", APIVersion:"batch/v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-scszs
	I0910 20:59:05.245435       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"d25be5d2-af62-4234-8d46-58ba7f07b6b3", APIVersion:"batch/v1", ResourceVersion:"454", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0910 20:59:06.253364       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"7e3c44b3-cd57-4cad-befb-f2febd8b50fc", APIVersion:"batch/v1", ResourceVersion:"463", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0910 20:59:42.373006       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"08b43703-b782-4433-b67e-346f0f38bca5", APIVersion:"apps/v1", ResourceVersion:"587", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0910 20:59:42.382931       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"a6a86b35-2cbd-4220-baa9-24d140ec3576", APIVersion:"apps/v1", ResourceVersion:"588", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-v5sgl
	
	* 
	* ==> kube-proxy [809ae83e52ce] <==
	* W0910 20:58:48.313258       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0910 20:58:48.317643       1 node.go:136] Successfully retrieved node IP: 192.168.105.6
	I0910 20:58:48.317657       1 server_others.go:186] Using iptables Proxier.
	I0910 20:58:48.317763       1 server.go:583] Version: v1.18.20
	I0910 20:58:48.319496       1 config.go:315] Starting service config controller
	I0910 20:58:48.319502       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0910 20:58:48.319517       1 config.go:133] Starting endpoints config controller
	I0910 20:58:48.319519       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0910 20:58:48.419578       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0910 20:58:48.419642       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [2c6e4c0ff431] <==
	* W0910 20:58:29.512586       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0910 20:58:29.512602       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0910 20:58:29.532926       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0910 20:58:29.533012       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0910 20:58:29.534059       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0910 20:58:29.534142       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0910 20:58:29.534190       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0910 20:58:29.534230       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0910 20:58:29.536247       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0910 20:58:29.538043       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0910 20:58:29.538373       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0910 20:58:29.538443       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0910 20:58:29.538490       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0910 20:58:29.538784       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0910 20:58:29.538836       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0910 20:58:29.538948       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0910 20:58:29.538999       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0910 20:58:29.539054       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0910 20:58:29.539091       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0910 20:58:29.539144       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0910 20:58:30.437376       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0910 20:58:30.460038       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0910 20:58:30.465096       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0910 20:58:30.490353       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0910 20:58:32.434352       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Sun 2023-09-10 20:58:04 UTC, ends at Sun 2023-09-10 21:00:12 UTC. --
	Sep 10 20:59:46 ingress-addon-legacy-065000 kubelet[2599]: I0910 20:59:46.909922    2599 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 424681855b80a7352761aedeb636238e06928fcad140856117e433c243583518
	Sep 10 20:59:46 ingress-addon-legacy-065000 kubelet[2599]: E0910 20:59:46.910390    2599 pod_workers.go:191] Error syncing pod 8efb9163-8360-40a0-8c79-d6bed61687f6 ("hello-world-app-5f5d8b66bb-v5sgl_default(8efb9163-8360-40a0-8c79-d6bed61687f6)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-v5sgl_default(8efb9163-8360-40a0-8c79-d6bed61687f6)"
	Sep 10 20:59:51 ingress-addon-legacy-065000 kubelet[2599]: I0910 20:59:51.874276    2599 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: d0fb0992f026e96b61b2c69055a311d5207eb9afc42521e338b5597907d2e60d
	Sep 10 20:59:51 ingress-addon-legacy-065000 kubelet[2599]: E0910 20:59:51.876672    2599 pod_workers.go:191] Error syncing pod dab16d27-b2aa-4663-bdf2-7a28602b3f17 ("kube-ingress-dns-minikube_kube-system(dab16d27-b2aa-4663-bdf2-7a28602b3f17)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(dab16d27-b2aa-4663-bdf2-7a28602b3f17)"
	Sep 10 20:59:57 ingress-addon-legacy-065000 kubelet[2599]: I0910 20:59:57.850435    2599 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-sb9rv" (UniqueName: "kubernetes.io/secret/dab16d27-b2aa-4663-bdf2-7a28602b3f17-minikube-ingress-dns-token-sb9rv") pod "dab16d27-b2aa-4663-bdf2-7a28602b3f17" (UID: "dab16d27-b2aa-4663-bdf2-7a28602b3f17")
	Sep 10 20:59:57 ingress-addon-legacy-065000 kubelet[2599]: I0910 20:59:57.852537    2599 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab16d27-b2aa-4663-bdf2-7a28602b3f17-minikube-ingress-dns-token-sb9rv" (OuterVolumeSpecName: "minikube-ingress-dns-token-sb9rv") pod "dab16d27-b2aa-4663-bdf2-7a28602b3f17" (UID: "dab16d27-b2aa-4663-bdf2-7a28602b3f17"). InnerVolumeSpecName "minikube-ingress-dns-token-sb9rv". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 10 20:59:57 ingress-addon-legacy-065000 kubelet[2599]: I0910 20:59:57.950924    2599 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-sb9rv" (UniqueName: "kubernetes.io/secret/dab16d27-b2aa-4663-bdf2-7a28602b3f17-minikube-ingress-dns-token-sb9rv") on node "ingress-addon-legacy-065000" DevicePath ""
	Sep 10 20:59:58 ingress-addon-legacy-065000 kubelet[2599]: I0910 20:59:58.870515    2599 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 424681855b80a7352761aedeb636238e06928fcad140856117e433c243583518
	Sep 10 20:59:59 ingress-addon-legacy-065000 kubelet[2599]: W0910 20:59:59.028467    2599 container.go:412] Failed to create summary reader for "/kubepods/besteffort/pod8efb9163-8360-40a0-8c79-d6bed61687f6/023c841f41965a20e8b697d6b72c637683a6a3005e37703bc6687a5164c8569d": none of the resources are being tracked.
	Sep 10 20:59:59 ingress-addon-legacy-065000 kubelet[2599]: I0910 20:59:59.087322    2599 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: d0fb0992f026e96b61b2c69055a311d5207eb9afc42521e338b5597907d2e60d
	Sep 10 20:59:59 ingress-addon-legacy-065000 kubelet[2599]: W0910 20:59:59.089669    2599 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-v5sgl through plugin: invalid network status for
	Sep 10 20:59:59 ingress-addon-legacy-065000 kubelet[2599]: I0910 20:59:59.093902    2599 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 424681855b80a7352761aedeb636238e06928fcad140856117e433c243583518
	Sep 10 20:59:59 ingress-addon-legacy-065000 kubelet[2599]: I0910 20:59:59.094021    2599 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 023c841f41965a20e8b697d6b72c637683a6a3005e37703bc6687a5164c8569d
	Sep 10 20:59:59 ingress-addon-legacy-065000 kubelet[2599]: E0910 20:59:59.094162    2599 pod_workers.go:191] Error syncing pod 8efb9163-8360-40a0-8c79-d6bed61687f6 ("hello-world-app-5f5d8b66bb-v5sgl_default(8efb9163-8360-40a0-8c79-d6bed61687f6)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-v5sgl_default(8efb9163-8360-40a0-8c79-d6bed61687f6)"
	Sep 10 21:00:00 ingress-addon-legacy-065000 kubelet[2599]: W0910 21:00:00.117379    2599 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-v5sgl through plugin: invalid network status for
	Sep 10 21:00:05 ingress-addon-legacy-065000 kubelet[2599]: E0910 21:00:05.391298    2599 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-nq4z2.1783a5461df404b7", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-nq4z2", UID:"06651b3d-819a-4ec0-838c-e7fe99b3a211", APIVersion:"v1", ResourceVersion:"446", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-065000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc137a9755742f2b7, ext:93028252967, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc137a9755742f2b7, ext:93028252967, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-nq4z2.1783a5461df404b7" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 10 21:00:05 ingress-addon-legacy-065000 kubelet[2599]: E0910 21:00:05.402956    2599 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-nq4z2.1783a5461df404b7", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-nq4z2", UID:"06651b3d-819a-4ec0-838c-e7fe99b3a211", APIVersion:"v1", ResourceVersion:"446", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-065000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc137a9755742f2b7, ext:93028252967, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc137a975579d4203, ext:93034171506, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-nq4z2.1783a5461df404b7" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 10 21:00:08 ingress-addon-legacy-065000 kubelet[2599]: W0910 21:00:08.248581    2599 pod_container_deletor.go:77] Container "b4431f7c2c26cb9b00f35fc59b5f7e896c13c9ba03a7cd93072fd93e02f4b04e" not found in pod's containers
	Sep 10 21:00:09 ingress-addon-legacy-065000 kubelet[2599]: I0910 21:00:09.616238    2599 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-nfxcn" (UniqueName: "kubernetes.io/secret/06651b3d-819a-4ec0-838c-e7fe99b3a211-ingress-nginx-token-nfxcn") pod "06651b3d-819a-4ec0-838c-e7fe99b3a211" (UID: "06651b3d-819a-4ec0-838c-e7fe99b3a211")
	Sep 10 21:00:09 ingress-addon-legacy-065000 kubelet[2599]: I0910 21:00:09.616359    2599 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/06651b3d-819a-4ec0-838c-e7fe99b3a211-webhook-cert") pod "06651b3d-819a-4ec0-838c-e7fe99b3a211" (UID: "06651b3d-819a-4ec0-838c-e7fe99b3a211")
	Sep 10 21:00:09 ingress-addon-legacy-065000 kubelet[2599]: I0910 21:00:09.627377    2599 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06651b3d-819a-4ec0-838c-e7fe99b3a211-ingress-nginx-token-nfxcn" (OuterVolumeSpecName: "ingress-nginx-token-nfxcn") pod "06651b3d-819a-4ec0-838c-e7fe99b3a211" (UID: "06651b3d-819a-4ec0-838c-e7fe99b3a211"). InnerVolumeSpecName "ingress-nginx-token-nfxcn". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 10 21:00:09 ingress-addon-legacy-065000 kubelet[2599]: I0910 21:00:09.627948    2599 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06651b3d-819a-4ec0-838c-e7fe99b3a211-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "06651b3d-819a-4ec0-838c-e7fe99b3a211" (UID: "06651b3d-819a-4ec0-838c-e7fe99b3a211"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 10 21:00:09 ingress-addon-legacy-065000 kubelet[2599]: I0910 21:00:09.719112    2599 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/06651b3d-819a-4ec0-838c-e7fe99b3a211-webhook-cert") on node "ingress-addon-legacy-065000" DevicePath ""
	Sep 10 21:00:09 ingress-addon-legacy-065000 kubelet[2599]: I0910 21:00:09.719195    2599 reconciler.go:319] Volume detached for volume "ingress-nginx-token-nfxcn" (UniqueName: "kubernetes.io/secret/06651b3d-819a-4ec0-838c-e7fe99b3a211-ingress-nginx-token-nfxcn") on node "ingress-addon-legacy-065000" DevicePath ""
	Sep 10 21:00:10 ingress-addon-legacy-065000 kubelet[2599]: W0910 21:00:10.885733    2599 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/06651b3d-819a-4ec0-838c-e7fe99b3a211/volumes" does not exist
	
	* 
	* ==> storage-provisioner [c67b2913f9c7] <==
	* I0910 20:58:49.616654       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0910 20:58:49.620737       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0910 20:58:49.620764       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0910 20:58:49.623528       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0910 20:58:49.623631       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-065000_558ed0c7-b27a-4cb6-b684-01601aed1052!
	I0910 20:58:49.624875       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"551b7f73-45f1-49d3-b54f-ae4927289296", APIVersion:"v1", ResourceVersion:"380", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-065000_558ed0c7-b27a-4cb6-b684-01601aed1052 became leader
	I0910 20:58:49.723681       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-065000_558ed0c7-b27a-4cb6-b684-01601aed1052!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-065000 -n ingress-addon-legacy-065000
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-065000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (57.77s)

TestMountStart/serial/StartWithMountFirst (10.39s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-880000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-880000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.318086709s)

                                                
                                                
-- stdout --
	* [mount-start-1-880000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-880000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-880000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-880000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-880000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-880000 -n mount-start-1-880000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-880000 -n mount-start-1-880000: exit status 7 (70.203958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-880000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.39s)
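Every start failure in this report hits the same error before Kubernetes is ever involved: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet daemon was not listening when the qemu2 driver tried to attach the VM's network. A minimal pre-flight check for this condition (a hypothetical helper, not part of minikube or its test harness) can be sketched as:

```python
import socket

def unix_socket_listening(path: str, timeout: float = 1.0) -> bool:
    """Return True if some process is accepting connections on the
    unix-domain socket at `path` (e.g. /var/run/socket_vmnet)."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        # connect() succeeds as soon as the server has called listen();
        # ENOENT / ECONNREFUSED / EACCES all surface as OSError.
        s.connect(path)
        return True
    except OSError:
        return False
    finally:
        s.close()

if __name__ == "__main__":
    # On the failing CI host this would print False, matching the
    # "Connection refused" seen in the logs above.
    print(unix_socket_listening("/var/run/socket_vmnet"))
```

Gating the qemu2 test group on a check like this would turn ten-second provisioning failures into an immediate, clearly attributed skip when the daemon is down.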

TestMultiNode/serial/FreshStart2Nodes (9.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-362000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:85: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-362000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.834633375s)

                                                
                                                
-- stdout --
	* [multinode-362000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-362000 in cluster multinode-362000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-362000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 14:02:24.057242    3370 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:02:24.057369    3370 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:02:24.057373    3370 out.go:309] Setting ErrFile to fd 2...
	I0910 14:02:24.057375    3370 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:02:24.057480    3370 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:02:24.058509    3370 out.go:303] Setting JSON to false
	I0910 14:02:24.073499    3370 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1919,"bootTime":1694377825,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0910 14:02:24.073562    3370 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0910 14:02:24.077483    3370 out.go:177] * [multinode-362000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0910 14:02:24.084505    3370 out.go:177]   - MINIKUBE_LOCATION=17207
	I0910 14:02:24.089432    3370 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 14:02:24.084584    3370 notify.go:220] Checking for updates...
	I0910 14:02:24.095493    3370 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 14:02:24.098513    3370 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 14:02:24.101546    3370 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	I0910 14:02:24.104488    3370 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 14:02:24.107568    3370 driver.go:373] Setting default libvirt URI to qemu:///system
	I0910 14:02:24.111392    3370 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 14:02:24.118409    3370 start.go:298] selected driver: qemu2
	I0910 14:02:24.118416    3370 start.go:902] validating driver "qemu2" against <nil>
	I0910 14:02:24.118424    3370 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 14:02:24.120435    3370 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0910 14:02:24.123493    3370 out.go:177] * Automatically selected the socket_vmnet network
	I0910 14:02:24.126599    3370 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 14:02:24.126619    3370 cni.go:84] Creating CNI manager for ""
	I0910 14:02:24.126623    3370 cni.go:136] 0 nodes found, recommending kindnet
	I0910 14:02:24.126627    3370 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0910 14:02:24.126631    3370 start_flags.go:321] config:
	{Name:multinode-362000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-362000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:doc
ker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAge
ntPID:0 AutoPauseInterval:1m0s}
	I0910 14:02:24.130895    3370 iso.go:125] acquiring lock: {Name:mk7b64d48bce9081530bc49bdfed6972c82152dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:02:24.138445    3370 out.go:177] * Starting control plane node multinode-362000 in cluster multinode-362000
	I0910 14:02:24.142289    3370 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0910 14:02:24.142308    3370 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0910 14:02:24.142325    3370 cache.go:57] Caching tarball of preloaded images
	I0910 14:02:24.142385    3370 preload.go:174] Found /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 14:02:24.142390    3370 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0910 14:02:24.142589    3370 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/multinode-362000/config.json ...
	I0910 14:02:24.142602    3370 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/multinode-362000/config.json: {Name:mk7631b3b04cbcceaee6e696218d3fbb49f8688e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 14:02:24.142809    3370 start.go:365] acquiring machines lock for multinode-362000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:02:24.142838    3370 start.go:369] acquired machines lock for "multinode-362000" in 23.833µs
	I0910 14:02:24.142850    3370 start.go:93] Provisioning new machine with config: &{Name:multinode-362000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-362000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:02:24.142877    3370 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:02:24.150474    3370 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 14:02:24.166606    3370 start.go:159] libmachine.API.Create for "multinode-362000" (driver="qemu2")
	I0910 14:02:24.166642    3370 client.go:168] LocalClient.Create starting
	I0910 14:02:24.166704    3370 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:02:24.166732    3370 main.go:141] libmachine: Decoding PEM data...
	I0910 14:02:24.166754    3370 main.go:141] libmachine: Parsing certificate...
	I0910 14:02:24.166798    3370 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:02:24.166819    3370 main.go:141] libmachine: Decoding PEM data...
	I0910 14:02:24.166826    3370 main.go:141] libmachine: Parsing certificate...
	I0910 14:02:24.167180    3370 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:02:24.341778    3370 main.go:141] libmachine: Creating SSH key...
	I0910 14:02:24.461211    3370 main.go:141] libmachine: Creating Disk image...
	I0910 14:02:24.461217    3370 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:02:24.461375    3370 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/multinode-362000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/multinode-362000/disk.qcow2
	I0910 14:02:24.469971    3370 main.go:141] libmachine: STDOUT: 
	I0910 14:02:24.469984    3370 main.go:141] libmachine: STDERR: 
	I0910 14:02:24.470043    3370 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/multinode-362000/disk.qcow2 +20000M
	I0910 14:02:24.477187    3370 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:02:24.477201    3370 main.go:141] libmachine: STDERR: 
	I0910 14:02:24.477214    3370 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/multinode-362000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/multinode-362000/disk.qcow2
	I0910 14:02:24.477219    3370 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:02:24.477253    3370 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/multinode-362000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/multinode-362000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/multinode-362000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:e6:8b:95:45:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/multinode-362000/disk.qcow2
	I0910 14:02:24.478718    3370 main.go:141] libmachine: STDOUT: 
	I0910 14:02:24.478730    3370 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:02:24.478749    3370 client.go:171] LocalClient.Create took 312.099958ms
	I0910 14:02:26.481024    3370 start.go:128] duration metric: createHost completed in 2.338132167s
	I0910 14:02:26.481088    3370 start.go:83] releasing machines lock for "multinode-362000", held for 2.338245542s
	W0910 14:02:26.481127    3370 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:02:26.490551    3370 out.go:177] * Deleting "multinode-362000" in qemu2 ...
	W0910 14:02:26.512277    3370 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:02:26.512304    3370 start.go:687] Will try again in 5 seconds ...
	I0910 14:02:31.514481    3370 start.go:365] acquiring machines lock for multinode-362000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:02:31.514965    3370 start.go:369] acquired machines lock for "multinode-362000" in 372.625µs
	I0910 14:02:31.515080    3370 start.go:93] Provisioning new machine with config: &{Name:multinode-362000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-362000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:02:31.515400    3370 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:02:31.521063    3370 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 14:02:31.566778    3370 start.go:159] libmachine.API.Create for "multinode-362000" (driver="qemu2")
	I0910 14:02:31.566824    3370 client.go:168] LocalClient.Create starting
	I0910 14:02:31.566946    3370 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:02:31.567002    3370 main.go:141] libmachine: Decoding PEM data...
	I0910 14:02:31.567019    3370 main.go:141] libmachine: Parsing certificate...
	I0910 14:02:31.567092    3370 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:02:31.567133    3370 main.go:141] libmachine: Decoding PEM data...
	I0910 14:02:31.567148    3370 main.go:141] libmachine: Parsing certificate...
	I0910 14:02:31.567662    3370 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:02:31.697598    3370 main.go:141] libmachine: Creating SSH key...
	I0910 14:02:31.804869    3370 main.go:141] libmachine: Creating Disk image...
	I0910 14:02:31.804874    3370 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:02:31.804999    3370 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/multinode-362000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/multinode-362000/disk.qcow2
	I0910 14:02:31.813427    3370 main.go:141] libmachine: STDOUT: 
	I0910 14:02:31.813441    3370 main.go:141] libmachine: STDERR: 
	I0910 14:02:31.813487    3370 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/multinode-362000/disk.qcow2 +20000M
	I0910 14:02:31.820680    3370 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:02:31.820696    3370 main.go:141] libmachine: STDERR: 
	I0910 14:02:31.820709    3370 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/multinode-362000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/multinode-362000/disk.qcow2
	I0910 14:02:31.820713    3370 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:02:31.820752    3370 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/multinode-362000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/multinode-362000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/multinode-362000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:b9:bf:c5:2e:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/multinode-362000/disk.qcow2
	I0910 14:02:31.822313    3370 main.go:141] libmachine: STDOUT: 
	I0910 14:02:31.822324    3370 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:02:31.822336    3370 client.go:171] LocalClient.Create took 255.504917ms
	I0910 14:02:33.824512    3370 start.go:128] duration metric: createHost completed in 2.309085125s
	I0910 14:02:33.824591    3370 start.go:83] releasing machines lock for "multinode-362000", held for 2.30960225s
	W0910 14:02:33.825118    3370 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-362000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-362000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:02:33.834930    3370 out.go:177] 
	W0910 14:02:33.838967    3370 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 14:02:33.839001    3370 out.go:239] * 
	W0910 14:02:33.841828    3370 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 14:02:33.850891    3370 out.go:177] 

** /stderr **
multinode_test.go:87: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-362000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-362000 -n multinode-362000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-362000 -n multinode-362000: exit status 7 (65.858667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-362000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.90s)
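Triage note: every qemu2 start in this run dies on the same `Failed to connect to "/var/run/socket_vmnet": Connection refused`, meaning the socket_vmnet daemon was not listening when `socket_vmnet_client` dialed it. A minimal probe for checking the CI host (the socket path comes from the qemu invocation logged above; everything else is an illustrative sketch, not minikube code):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// probe dials a unix-domain socket and reports whether anything is
// listening on it. "unreachable" here corresponds to the
// "Connection refused" failure mode seen throughout this log.
func probe(path string) string {
	conn, err := net.DialTimeout("unix", path, time.Second)
	if err != nil {
		return "unreachable: " + err.Error()
	}
	conn.Close()
	return "reachable"
}

func main() {
	// Path taken from the socket_vmnet_client invocation in this log.
	fmt.Println(probe("/var/run/socket_vmnet"))
}
```

If the probe reports unreachable on the Jenkins host, the daemon needs to be (re)started before re-running the suite; the exact service-management command for this host is not shown in the log.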

TestMultiNode/serial/DeployApp2Nodes (83.68s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-362000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:481: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-362000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (124.567958ms)

** stderr ** 
	error: cluster "multinode-362000" does not exist

** /stderr **
multinode_test.go:483: failed to create busybox deployment to multinode cluster
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-362000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-362000 -- rollout status deployment/busybox: exit status 1 (57.145625ms)

** stderr ** 
	error: no server found for cluster "multinode-362000"

** /stderr **
multinode_test.go:488: failed to deploy busybox to multinode cluster
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-362000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-362000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (55.716625ms)

** stderr ** 
	error: no server found for cluster "multinode-362000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-362000 -- get pods -o jsonpath='{.items[*].status.podIP}'
E0910 14:02:34.848034    2200 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/functional-765000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-362000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.774833ms)

** stderr ** 
	error: no server found for cluster "multinode-362000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-362000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-362000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.715792ms)

** stderr ** 
	error: no server found for cluster "multinode-362000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-362000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-362000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.42575ms)

** stderr ** 
	error: no server found for cluster "multinode-362000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-362000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-362000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (98.1855ms)

** stderr ** 
	error: no server found for cluster "multinode-362000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-362000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-362000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.463042ms)

** stderr ** 
	error: no server found for cluster "multinode-362000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-362000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-362000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.4415ms)

** stderr ** 
	error: no server found for cluster "multinode-362000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-362000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-362000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.397458ms)

** stderr ** 
	error: no server found for cluster "multinode-362000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-362000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-362000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.550083ms)

** stderr ** 
	error: no server found for cluster "multinode-362000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0910 14:03:56.770455    2200 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/functional-765000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-362000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-362000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.232459ms)

** stderr ** 
	error: no server found for cluster "multinode-362000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:512: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-362000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:516: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-362000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (54.351375ms)

** stderr ** 
	error: no server found for cluster "multinode-362000"

** /stderr **
multinode_test.go:518: failed get Pod names
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-362000 -- exec  -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-362000 -- exec  -- nslookup kubernetes.io: exit status 1 (55.442583ms)

** stderr ** 
	error: no server found for cluster "multinode-362000"

** /stderr **
multinode_test.go:526: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-362000 -- exec  -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-362000 -- exec  -- nslookup kubernetes.default: exit status 1 (54.606333ms)

** stderr ** 
	error: no server found for cluster "multinode-362000"

** /stderr **
multinode_test.go:536: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-362000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-362000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (55.337042ms)

** stderr ** 
	error: no server found for cluster "multinode-362000"

** /stderr **
multinode_test.go:544: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-362000 -n multinode-362000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-362000 -n multinode-362000: exit status 7 (29.70275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-362000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (83.68s)

TestMultiNode/serial/PingHostFrom2Pods (0.08s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-362000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-362000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.021ms)

** stderr ** 
	error: no server found for cluster "multinode-362000"

** /stderr **
multinode_test.go:554: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-362000 -n multinode-362000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-362000 -n multinode-362000: exit status 7 (29.406417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-362000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.08s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-362000 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-362000 -v 3 --alsologtostderr: exit status 89 (41.711042ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-362000"

-- /stdout --
** stderr ** 
	I0910 14:03:57.723017    3463 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:03:57.723252    3463 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:03:57.723255    3463 out.go:309] Setting ErrFile to fd 2...
	I0910 14:03:57.723257    3463 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:03:57.723383    3463 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:03:57.723606    3463 mustload.go:65] Loading cluster: multinode-362000
	I0910 14:03:57.723774    3463 config.go:182] Loaded profile config "multinode-362000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:03:57.728557    3463 out.go:177] * The control plane node must be running for this command
	I0910 14:03:57.732599    3463 out.go:177]   To start a cluster, run: "minikube start -p multinode-362000"

** /stderr **
multinode_test.go:112: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-362000 -v 3 --alsologtostderr" : exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-362000 -n multinode-362000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-362000 -n multinode-362000: exit status 7 (29.328708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-362000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/ProfileList (0.1s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:155: expected profile "multinode-362000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-362000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-362000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidd
en\":false,\"KVMNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.1\",\"ClusterName\":\"multinode-362000\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\
",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPat
h\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-362000 -n multinode-362000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-362000 -n multinode-362000: exit status 7 (29.080375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-362000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.10s)

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-362000 status --output json --alsologtostderr
multinode_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-362000 status --output json --alsologtostderr: exit status 7 (29.0925ms)

-- stdout --
	{"Name":"multinode-362000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0910 14:03:57.898264    3473 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:03:57.898387    3473 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:03:57.898390    3473 out.go:309] Setting ErrFile to fd 2...
	I0910 14:03:57.898392    3473 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:03:57.898497    3473 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:03:57.898606    3473 out.go:303] Setting JSON to true
	I0910 14:03:57.898624    3473 mustload.go:65] Loading cluster: multinode-362000
	I0910 14:03:57.898669    3473 notify.go:220] Checking for updates...
	I0910 14:03:57.898783    3473 config.go:182] Loaded profile config "multinode-362000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:03:57.898787    3473 status.go:255] checking status of multinode-362000 ...
	I0910 14:03:57.898962    3473 status.go:330] multinode-362000 host status = "Stopped" (err=<nil>)
	I0910 14:03:57.898965    3473 status.go:343] host is not running, skipping remaining checks
	I0910 14:03:57.898967    3473 status.go:257] multinode-362000 status: &{Name:multinode-362000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:180: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-362000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-362000 -n multinode-362000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-362000 -n multinode-362000: exit status 7 (29.531209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-362000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-362000 node stop m03
multinode_test.go:210: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-362000 node stop m03: exit status 85 (47.206583ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:212: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-362000 node stop m03": exit status 85
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-362000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-362000 status: exit status 7 (29.525833ms)

-- stdout --
	multinode-362000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-362000 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-362000 status --alsologtostderr: exit status 7 (29.5225ms)

-- stdout --
	multinode-362000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0910 14:03:58.034871    3481 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:03:58.034996    3481 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:03:58.034999    3481 out.go:309] Setting ErrFile to fd 2...
	I0910 14:03:58.035009    3481 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:03:58.035122    3481 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:03:58.035241    3481 out.go:303] Setting JSON to false
	I0910 14:03:58.035253    3481 mustload.go:65] Loading cluster: multinode-362000
	I0910 14:03:58.035316    3481 notify.go:220] Checking for updates...
	I0910 14:03:58.035428    3481 config.go:182] Loaded profile config "multinode-362000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:03:58.035433    3481 status.go:255] checking status of multinode-362000 ...
	I0910 14:03:58.035619    3481 status.go:330] multinode-362000 host status = "Stopped" (err=<nil>)
	I0910 14:03:58.035623    3481 status.go:343] host is not running, skipping remaining checks
	I0910 14:03:58.035625    3481 status.go:257] multinode-362000 status: &{Name:multinode-362000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:229: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-362000 status --alsologtostderr": multinode-362000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-362000 -n multinode-362000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-362000 -n multinode-362000: exit status 7 (29.26925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-362000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

TestMultiNode/serial/StartAfterStop (0.1s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-362000 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-362000 node start m03 --alsologtostderr: exit status 85 (44.136875ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0910 14:03:58.093444    3485 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:03:58.093634    3485 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:03:58.093637    3485 out.go:309] Setting ErrFile to fd 2...
	I0910 14:03:58.093639    3485 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:03:58.093747    3485 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:03:58.093965    3485 mustload.go:65] Loading cluster: multinode-362000
	I0910 14:03:58.094130    3485 config.go:182] Loaded profile config "multinode-362000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:03:58.097430    3485 out.go:177] 
	W0910 14:03:58.100327    3485 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0910 14:03:58.100331    3485 out.go:239] * 
	* 
	W0910 14:03:58.101860    3485 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 14:03:58.105243    3485 out.go:177] 

** /stderr **
multinode_test.go:256: I0910 14:03:58.093444    3485 out.go:296] Setting OutFile to fd 1 ...
I0910 14:03:58.093634    3485 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0910 14:03:58.093637    3485 out.go:309] Setting ErrFile to fd 2...
I0910 14:03:58.093639    3485 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0910 14:03:58.093747    3485 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
I0910 14:03:58.093965    3485 mustload.go:65] Loading cluster: multinode-362000
I0910 14:03:58.094130    3485 config.go:182] Loaded profile config "multinode-362000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0910 14:03:58.097430    3485 out.go:177] 
W0910 14:03:58.100327    3485 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0910 14:03:58.100331    3485 out.go:239] * 
* 
W0910 14:03:58.101860    3485 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0910 14:03:58.105243    3485 out.go:177] 
multinode_test.go:257: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-362000 node start m03 --alsologtostderr": exit status 85
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-362000 status
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-362000 status: exit status 7 (29.472875ms)

-- stdout --
	multinode-362000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:263: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-362000 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-362000 -n multinode-362000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-362000 -n multinode-362000: exit status 7 (29.258125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-362000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.10s)

TestMultiNode/serial/RestartKeepsNodes (5.37s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-362000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-362000
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-362000 --wait=true -v=8 --alsologtostderr
multinode_test.go:295: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-362000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.17911375s)

-- stdout --
	* [multinode-362000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-362000 in cluster multinode-362000
	* Restarting existing qemu2 VM for "multinode-362000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-362000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 14:03:58.285786    3495 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:03:58.285910    3495 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:03:58.285913    3495 out.go:309] Setting ErrFile to fd 2...
	I0910 14:03:58.285915    3495 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:03:58.286032    3495 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:03:58.286943    3495 out.go:303] Setting JSON to false
	I0910 14:03:58.302298    3495 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2013,"bootTime":1694377825,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0910 14:03:58.302364    3495 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0910 14:03:58.306266    3495 out.go:177] * [multinode-362000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0910 14:03:58.313335    3495 out.go:177]   - MINIKUBE_LOCATION=17207
	I0910 14:03:58.313391    3495 notify.go:220] Checking for updates...
	I0910 14:03:58.321246    3495 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 14:03:58.324364    3495 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 14:03:58.327289    3495 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 14:03:58.330316    3495 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	I0910 14:03:58.333333    3495 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 14:03:58.336610    3495 config.go:182] Loaded profile config "multinode-362000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:03:58.336658    3495 driver.go:373] Setting default libvirt URI to qemu:///system
	I0910 14:03:58.341276    3495 out.go:177] * Using the qemu2 driver based on existing profile
	I0910 14:03:58.348198    3495 start.go:298] selected driver: qemu2
	I0910 14:03:58.348205    3495 start.go:902] validating driver "qemu2" against &{Name:multinode-362000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.1 ClusterName:multinode-362000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 14:03:58.348283    3495 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 14:03:58.350179    3495 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 14:03:58.350203    3495 cni.go:84] Creating CNI manager for ""
	I0910 14:03:58.350208    3495 cni.go:136] 1 nodes found, recommending kindnet
	I0910 14:03:58.350212    3495 start_flags.go:321] config:
	{Name:multinode-362000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-362000 Namespace:default API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 14:03:58.353968    3495 iso.go:125] acquiring lock: {Name:mk7b64d48bce9081530bc49bdfed6972c82152dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:03:58.361139    3495 out.go:177] * Starting control plane node multinode-362000 in cluster multinode-362000
	I0910 14:03:58.365287    3495 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0910 14:03:58.365319    3495 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0910 14:03:58.365334    3495 cache.go:57] Caching tarball of preloaded images
	I0910 14:03:58.365422    3495 preload.go:174] Found /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 14:03:58.365428    3495 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0910 14:03:58.365485    3495 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/multinode-362000/config.json ...
	I0910 14:03:58.365833    3495 start.go:365] acquiring machines lock for multinode-362000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:03:58.365865    3495 start.go:369] acquired machines lock for "multinode-362000" in 24.959µs
	I0910 14:03:58.365875    3495 start.go:96] Skipping create...Using existing machine configuration
	I0910 14:03:58.365880    3495 fix.go:54] fixHost starting: 
	I0910 14:03:58.365998    3495 fix.go:102] recreateIfNeeded on multinode-362000: state=Stopped err=<nil>
	W0910 14:03:58.366006    3495 fix.go:128] unexpected machine state, will restart: <nil>
	I0910 14:03:58.373332    3495 out.go:177] * Restarting existing qemu2 VM for "multinode-362000" ...
	I0910 14:03:58.377376    3495 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/multinode-362000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/multinode-362000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/multinode-362000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:b9:bf:c5:2e:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/multinode-362000/disk.qcow2
	I0910 14:03:58.379485    3495 main.go:141] libmachine: STDOUT: 
	I0910 14:03:58.379502    3495 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:03:58.379530    3495 fix.go:56] fixHost completed within 13.649ms
	I0910 14:03:58.379536    3495 start.go:83] releasing machines lock for "multinode-362000", held for 13.666833ms
	W0910 14:03:58.379542    3495 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 14:03:58.379583    3495 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:03:58.379587    3495 start.go:687] Will try again in 5 seconds ...
	I0910 14:04:03.381687    3495 start.go:365] acquiring machines lock for multinode-362000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:04:03.382137    3495 start.go:369] acquired machines lock for "multinode-362000" in 370.416µs
	I0910 14:04:03.382313    3495 start.go:96] Skipping create...Using existing machine configuration
	I0910 14:04:03.382333    3495 fix.go:54] fixHost starting: 
	I0910 14:04:03.383146    3495 fix.go:102] recreateIfNeeded on multinode-362000: state=Stopped err=<nil>
	W0910 14:04:03.383172    3495 fix.go:128] unexpected machine state, will restart: <nil>
	I0910 14:04:03.391597    3495 out.go:177] * Restarting existing qemu2 VM for "multinode-362000" ...
	I0910 14:04:03.395821    3495 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/multinode-362000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/multinode-362000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/multinode-362000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:b9:bf:c5:2e:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/multinode-362000/disk.qcow2
	I0910 14:04:03.404770    3495 main.go:141] libmachine: STDOUT: 
	I0910 14:04:03.404823    3495 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:04:03.404895    3495 fix.go:56] fixHost completed within 22.564875ms
	I0910 14:04:03.404913    3495 start.go:83] releasing machines lock for "multinode-362000", held for 22.754ms
	W0910 14:04:03.405096    3495 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-362000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-362000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:04:03.410632    3495 out.go:177] 
	W0910 14:04:03.414768    3495 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 14:04:03.414821    3495 out.go:239] * 
	* 
	W0910 14:04:03.417323    3495 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 14:04:03.424656    3495 out.go:177] 

** /stderr **
multinode_test.go:297: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-362000" : exit status 80
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-362000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-362000 -n multinode-362000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-362000 -n multinode-362000: exit status 7 (32.937208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-362000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (5.37s)
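Note: every restart attempt above fails with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, which suggests the socket_vmnet daemon was not running on the agent. A minimal pre-flight sketch is shown below; the helper name is hypothetical, the socket path is the one from the log, and the `brew services` remedy assumes a Homebrew-based socket_vmnet install rather than anything stated in this report:

```shell
# Hypothetical pre-flight check for the qemu2 driver's networking socket.
# -S tests that the path exists and is a unix-domain socket.
check_vmnet_socket() {
  sock="${1:-/var/run/socket_vmnet}"
  if [ -S "$sock" ]; then
    echo "socket_vmnet socket present: $sock"
    return 0
  fi
  echo "socket_vmnet socket missing: $sock" >&2
  echo "possible fix (Homebrew install assumed): sudo brew services start socket_vmnet" >&2
  return 1
}
```

If the socket is present and the error persists, the log's own suggestion (`minikube delete -p multinode-362000` followed by a fresh `start`) is the next step to try.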

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-362000 node delete m03
multinode_test.go:394: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-362000 node delete m03: exit status 89 (39.373166ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-362000"

-- /stdout --
multinode_test.go:396: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-362000 node delete m03": exit status 89
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-362000 status --alsologtostderr
multinode_test.go:400: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-362000 status --alsologtostderr: exit status 7 (28.99225ms)

-- stdout --
	multinode-362000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0910 14:04:03.604246    3509 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:04:03.604390    3509 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:04:03.604393    3509 out.go:309] Setting ErrFile to fd 2...
	I0910 14:04:03.604395    3509 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:04:03.604510    3509 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:04:03.604626    3509 out.go:303] Setting JSON to false
	I0910 14:04:03.604640    3509 mustload.go:65] Loading cluster: multinode-362000
	I0910 14:04:03.604700    3509 notify.go:220] Checking for updates...
	I0910 14:04:03.604810    3509 config.go:182] Loaded profile config "multinode-362000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:04:03.604815    3509 status.go:255] checking status of multinode-362000 ...
	I0910 14:04:03.605009    3509 status.go:330] multinode-362000 host status = "Stopped" (err=<nil>)
	I0910 14:04:03.605012    3509 status.go:343] host is not running, skipping remaining checks
	I0910 14:04:03.605014    3509 status.go:257] multinode-362000 status: &{Name:multinode-362000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:402: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-362000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-362000 -n multinode-362000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-362000 -n multinode-362000: exit status 7 (29.160542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-362000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

TestMultiNode/serial/StopMultiNode (0.15s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-362000 stop
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-362000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-362000 status: exit status 7 (29.535209ms)

-- stdout --
	multinode-362000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-362000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-362000 status --alsologtostderr: exit status 7 (28.978208ms)

-- stdout --
	multinode-362000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0910 14:04:03.753901    3517 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:04:03.754032    3517 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:04:03.754034    3517 out.go:309] Setting ErrFile to fd 2...
	I0910 14:04:03.754037    3517 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:04:03.754151    3517 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:04:03.754257    3517 out.go:303] Setting JSON to false
	I0910 14:04:03.754273    3517 mustload.go:65] Loading cluster: multinode-362000
	I0910 14:04:03.754314    3517 notify.go:220] Checking for updates...
	I0910 14:04:03.754456    3517 config.go:182] Loaded profile config "multinode-362000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:04:03.754461    3517 status.go:255] checking status of multinode-362000 ...
	I0910 14:04:03.754649    3517 status.go:330] multinode-362000 host status = "Stopped" (err=<nil>)
	I0910 14:04:03.754652    3517 status.go:343] host is not running, skipping remaining checks
	I0910 14:04:03.754654    3517 status.go:257] multinode-362000 status: &{Name:multinode-362000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:333: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-362000 status --alsologtostderr": multinode-362000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:337: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-362000 status --alsologtostderr": multinode-362000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-362000 -n multinode-362000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-362000 -n multinode-362000: exit status 7 (29.833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-362000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (0.15s)

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-362000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:354: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-362000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.178546666s)

-- stdout --
	* [multinode-362000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-362000 in cluster multinode-362000
	* Restarting existing qemu2 VM for "multinode-362000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-362000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 14:04:03.813219    3521 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:04:03.813551    3521 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:04:03.813554    3521 out.go:309] Setting ErrFile to fd 2...
	I0910 14:04:03.813557    3521 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:04:03.813689    3521 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:04:03.814865    3521 out.go:303] Setting JSON to false
	I0910 14:04:03.830220    3521 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2018,"bootTime":1694377825,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0910 14:04:03.830299    3521 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0910 14:04:03.835004    3521 out.go:177] * [multinode-362000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0910 14:04:03.841943    3521 out.go:177]   - MINIKUBE_LOCATION=17207
	I0910 14:04:03.841979    3521 notify.go:220] Checking for updates...
	I0910 14:04:03.846002    3521 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 14:04:03.849970    3521 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 14:04:03.852942    3521 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 14:04:03.855964    3521 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	I0910 14:04:03.859001    3521 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 14:04:03.862253    3521 config.go:182] Loaded profile config "multinode-362000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:04:03.862511    3521 driver.go:373] Setting default libvirt URI to qemu:///system
	I0910 14:04:03.866921    3521 out.go:177] * Using the qemu2 driver based on existing profile
	I0910 14:04:03.873949    3521 start.go:298] selected driver: qemu2
	I0910 14:04:03.873957    3521 start.go:902] validating driver "qemu2" against &{Name:multinode-362000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.1 ClusterName:multinode-362000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 14:04:03.874017    3521 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 14:04:03.875966    3521 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 14:04:03.875987    3521 cni.go:84] Creating CNI manager for ""
	I0910 14:04:03.875991    3521 cni.go:136] 1 nodes found, recommending kindnet
	I0910 14:04:03.875997    3521 start_flags.go:321] config:
	{Name:multinode-362000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-362000 Namespace:default API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 14:04:03.879939    3521 iso.go:125] acquiring lock: {Name:mk7b64d48bce9081530bc49bdfed6972c82152dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:04:03.886832    3521 out.go:177] * Starting control plane node multinode-362000 in cluster multinode-362000
	I0910 14:04:03.890969    3521 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0910 14:04:03.890987    3521 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0910 14:04:03.891009    3521 cache.go:57] Caching tarball of preloaded images
	I0910 14:04:03.891065    3521 preload.go:174] Found /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 14:04:03.891071    3521 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0910 14:04:03.891144    3521 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/multinode-362000/config.json ...
	I0910 14:04:03.891515    3521 start.go:365] acquiring machines lock for multinode-362000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:04:03.891541    3521 start.go:369] acquired machines lock for "multinode-362000" in 20.375µs
	I0910 14:04:03.891550    3521 start.go:96] Skipping create...Using existing machine configuration
	I0910 14:04:03.891553    3521 fix.go:54] fixHost starting: 
	I0910 14:04:03.891673    3521 fix.go:102] recreateIfNeeded on multinode-362000: state=Stopped err=<nil>
	W0910 14:04:03.891680    3521 fix.go:128] unexpected machine state, will restart: <nil>
	I0910 14:04:03.899939    3521 out.go:177] * Restarting existing qemu2 VM for "multinode-362000" ...
	I0910 14:04:03.904043    3521 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/multinode-362000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/multinode-362000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/multinode-362000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:b9:bf:c5:2e:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/multinode-362000/disk.qcow2
	I0910 14:04:03.905886    3521 main.go:141] libmachine: STDOUT: 
	I0910 14:04:03.905903    3521 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:04:03.905933    3521 fix.go:56] fixHost completed within 14.377125ms
	I0910 14:04:03.905938    3521 start.go:83] releasing machines lock for "multinode-362000", held for 14.393334ms
	W0910 14:04:03.905945    3521 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 14:04:03.905987    3521 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:04:03.905992    3521 start.go:687] Will try again in 5 seconds ...
	I0910 14:04:08.908061    3521 start.go:365] acquiring machines lock for multinode-362000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:04:08.908445    3521 start.go:369] acquired machines lock for "multinode-362000" in 300.5µs
	I0910 14:04:08.908537    3521 start.go:96] Skipping create...Using existing machine configuration
	I0910 14:04:08.908556    3521 fix.go:54] fixHost starting: 
	I0910 14:04:08.909368    3521 fix.go:102] recreateIfNeeded on multinode-362000: state=Stopped err=<nil>
	W0910 14:04:08.909391    3521 fix.go:128] unexpected machine state, will restart: <nil>
	I0910 14:04:08.916717    3521 out.go:177] * Restarting existing qemu2 VM for "multinode-362000" ...
	I0910 14:04:08.920811    3521 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/multinode-362000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/multinode-362000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/multinode-362000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:b9:bf:c5:2e:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/multinode-362000/disk.qcow2
	I0910 14:04:08.928974    3521 main.go:141] libmachine: STDOUT: 
	I0910 14:04:08.929040    3521 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:04:08.929126    3521 fix.go:56] fixHost completed within 20.567583ms
	I0910 14:04:08.929150    3521 start.go:83] releasing machines lock for "multinode-362000", held for 20.683833ms
	W0910 14:04:08.929464    3521 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-362000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-362000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:04:08.936466    3521 out.go:177] 
	W0910 14:04:08.940794    3521 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 14:04:08.940821    3521 out.go:239] * 
	* 
	W0910 14:04:08.943285    3521 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 14:04:08.951651    3521 out.go:177] 

** /stderr **
multinode_test.go:356: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-362000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-362000 -n multinode-362000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-362000 -n multinode-362000: exit status 7 (67.424292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-362000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)

TestMultiNode/serial/ValidateNameConflict (19.92s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-362000
multinode_test.go:452: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-362000-m01 --driver=qemu2 
E0910 14:04:15.001824    2200 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/client.crt: no such file or directory
E0910 14:04:15.008249    2200 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/client.crt: no such file or directory
E0910 14:04:15.019358    2200 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/client.crt: no such file or directory
E0910 14:04:15.041461    2200 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/client.crt: no such file or directory
E0910 14:04:15.083534    2200 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/client.crt: no such file or directory
E0910 14:04:15.165628    2200 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/client.crt: no such file or directory
E0910 14:04:15.327738    2200 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/client.crt: no such file or directory
E0910 14:04:15.649931    2200 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/client.crt: no such file or directory
E0910 14:04:16.292253    2200 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/client.crt: no such file or directory
E0910 14:04:17.574563    2200 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/client.crt: no such file or directory
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-362000-m01 --driver=qemu2 : exit status 80 (9.850227208s)

-- stdout --
	* [multinode-362000-m01] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-362000-m01 in cluster multinode-362000-m01
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-362000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-362000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-362000-m02 --driver=qemu2 
E0910 14:04:20.137231    2200 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/client.crt: no such file or directory
E0910 14:04:25.259764    2200 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/client.crt: no such file or directory
multinode_test.go:460: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-362000-m02 --driver=qemu2 : exit status 80 (9.822655916s)

-- stdout --
	* [multinode-362000-m02] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-362000-m02 in cluster multinode-362000-m02
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-362000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-362000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:462: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-362000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-362000
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-362000: exit status 89 (78.327333ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-362000"

-- /stdout --
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-362000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-362000 -n multinode-362000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-362000 -n multinode-362000: exit status 7 (30.166292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-362000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (19.92s)
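
Every qemu2 failure in this run reduces to the same root cause visible in the stderr captures: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet daemon is not listening when the driver tries to attach the VM's network. A minimal pre-flight check (a sketch; the socket path matches the one in the logs above, and the Homebrew service name is an assumption about this agent's setup) might be:

```shell
# Sketch: verify the socket_vmnet daemon is reachable before the qemu2
# driver tries to use it. The path matches SocketVMnetPath in the logs.
SOCKET=/var/run/socket_vmnet
if [ -S "$SOCKET" ]; then
  STATUS=present
else
  STATUS=missing
fi
echo "socket_vmnet socket: $STATUS"
# If missing, starting the daemon (e.g. via its Homebrew service,
# if installed that way) should clear the "Connection refused" errors:
#   sudo brew services start socket_vmnet
```

If the socket file exists but connections are still refused, the daemon may have died while leaving a stale socket behind, in which case restarting it on the build agent is the likely fix.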

TestPreload (9.92s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-404000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
E0910 14:04:35.500693    2200 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/client.crt: no such file or directory
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-404000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.747127208s)

-- stdout --
	* [test-preload-404000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node test-preload-404000 in cluster test-preload-404000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-404000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 14:04:29.106698    3581 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:04:29.106803    3581 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:04:29.106806    3581 out.go:309] Setting ErrFile to fd 2...
	I0910 14:04:29.106808    3581 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:04:29.106916    3581 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:04:29.107935    3581 out.go:303] Setting JSON to false
	I0910 14:04:29.122974    3581 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2044,"bootTime":1694377825,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0910 14:04:29.123059    3581 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0910 14:04:29.128572    3581 out.go:177] * [test-preload-404000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0910 14:04:29.136504    3581 out.go:177]   - MINIKUBE_LOCATION=17207
	I0910 14:04:29.136569    3581 notify.go:220] Checking for updates...
	I0910 14:04:29.140370    3581 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 14:04:29.143443    3581 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 14:04:29.146542    3581 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 14:04:29.150288    3581 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	I0910 14:04:29.153522    3581 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 14:04:29.156864    3581 config.go:182] Loaded profile config "multinode-362000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:04:29.156920    3581 driver.go:373] Setting default libvirt URI to qemu:///system
	I0910 14:04:29.161315    3581 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 14:04:29.168566    3581 start.go:298] selected driver: qemu2
	I0910 14:04:29.168575    3581 start.go:902] validating driver "qemu2" against <nil>
	I0910 14:04:29.168584    3581 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 14:04:29.170525    3581 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0910 14:04:29.173507    3581 out.go:177] * Automatically selected the socket_vmnet network
	I0910 14:04:29.176864    3581 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 14:04:29.176930    3581 cni.go:84] Creating CNI manager for ""
	I0910 14:04:29.176945    3581 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 14:04:29.176954    3581 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 14:04:29.176979    3581 start_flags.go:321] config:
	{Name:test-preload-404000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-404000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 14:04:29.181655    3581 iso.go:125] acquiring lock: {Name:mk7b64d48bce9081530bc49bdfed6972c82152dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:04:29.188504    3581 out.go:177] * Starting control plane node test-preload-404000 in cluster test-preload-404000
	I0910 14:04:29.192508    3581 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0910 14:04:29.192579    3581 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/test-preload-404000/config.json ...
	I0910 14:04:29.192596    3581 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/test-preload-404000/config.json: {Name:mk6bbdeff14c2047debd1553a9648a69444d61ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 14:04:29.192615    3581 cache.go:107] acquiring lock: {Name:mk54fafb2c8726195c146a28b31a05730133ba38 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:04:29.192615    3581 cache.go:107] acquiring lock: {Name:mk8c3f3ae9c15537bb18cea6df123126f6cf8677 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:04:29.192661    3581 cache.go:107] acquiring lock: {Name:mkab31cba64201a9f7cce276e41fce57594c9d04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:04:29.192815    3581 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0910 14:04:29.192819    3581 cache.go:107] acquiring lock: {Name:mk2b41acee7d8115329beb9cb8f6c535dfd07367 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:04:29.192841    3581 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0910 14:04:29.192831    3581 cache.go:107] acquiring lock: {Name:mkbb43f6bb571675fc2e654575ca9285a6e8c5d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:04:29.192851    3581 cache.go:107] acquiring lock: {Name:mkbd4e5447eec1d123563a6b8cd1952001b51ce0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:04:29.192894    3581 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 14:04:29.192916    3581 cache.go:107] acquiring lock: {Name:mk3c500b3a6acf966ef4737660415b34af4532a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:04:29.192924    3581 cache.go:107] acquiring lock: {Name:mke8eae67aa355716f54c25794c0585b4b852367 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:04:29.192945    3581 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0910 14:04:29.193010    3581 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0910 14:04:29.193024    3581 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0910 14:04:29.193075    3581 start.go:365] acquiring machines lock for test-preload-404000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:04:29.193118    3581 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0910 14:04:29.193129    3581 start.go:369] acquired machines lock for "test-preload-404000" in 42.875µs
	I0910 14:04:29.193138    3581 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0910 14:04:29.193142    3581 start.go:93] Provisioning new machine with config: &{Name:test-preload-404000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-404000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:04:29.193196    3581 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:04:29.201518    3581 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 14:04:29.208407    3581 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0910 14:04:29.208449    3581 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0910 14:04:29.208491    3581 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0910 14:04:29.209054    3581 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0910 14:04:29.209133    3581 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0910 14:04:29.209172    3581 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0910 14:04:29.209208    3581 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0910 14:04:29.209204    3581 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0910 14:04:29.217569    3581 start.go:159] libmachine.API.Create for "test-preload-404000" (driver="qemu2")
	I0910 14:04:29.217631    3581 client.go:168] LocalClient.Create starting
	I0910 14:04:29.217698    3581 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:04:29.217738    3581 main.go:141] libmachine: Decoding PEM data...
	I0910 14:04:29.217752    3581 main.go:141] libmachine: Parsing certificate...
	I0910 14:04:29.217799    3581 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:04:29.217819    3581 main.go:141] libmachine: Decoding PEM data...
	I0910 14:04:29.217826    3581 main.go:141] libmachine: Parsing certificate...
	I0910 14:04:29.218149    3581 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:04:29.350620    3581 main.go:141] libmachine: Creating SSH key...
	I0910 14:04:29.417381    3581 main.go:141] libmachine: Creating Disk image...
	I0910 14:04:29.417392    3581 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:04:29.417539    3581 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/test-preload-404000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/test-preload-404000/disk.qcow2
	I0910 14:04:29.426416    3581 main.go:141] libmachine: STDOUT: 
	I0910 14:04:29.426445    3581 main.go:141] libmachine: STDERR: 
	I0910 14:04:29.426521    3581 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/test-preload-404000/disk.qcow2 +20000M
	I0910 14:04:29.434366    3581 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:04:29.434386    3581 main.go:141] libmachine: STDERR: 
	I0910 14:04:29.434404    3581 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/test-preload-404000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/test-preload-404000/disk.qcow2
	I0910 14:04:29.434409    3581 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:04:29.434440    3581 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/test-preload-404000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/test-preload-404000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/test-preload-404000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:4d:a2:59:14:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/test-preload-404000/disk.qcow2
	I0910 14:04:29.435977    3581 main.go:141] libmachine: STDOUT: 
	I0910 14:04:29.435990    3581 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:04:29.436009    3581 client.go:171] LocalClient.Create took 218.372625ms
	I0910 14:04:29.819339    3581 cache.go:162] opening:  /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0910 14:04:29.976577    3581 cache.go:162] opening:  /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0910 14:04:30.156960    3581 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0910 14:04:30.156996    3581 cache.go:162] opening:  /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0910 14:04:30.309048    3581 cache.go:162] opening:  /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0910 14:04:30.589717    3581 cache.go:162] opening:  /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0910 14:04:30.722420    3581 cache.go:157] /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0910 14:04:30.722449    3581 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 1.529674833s
	I0910 14:04:30.722463    3581 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0910 14:04:30.956332    3581 cache.go:162] opening:  /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	W0910 14:04:31.135911    3581 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0910 14:04:31.135937    3581 cache.go:162] opening:  /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0910 14:04:31.219867    3581 cache.go:162] opening:  /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0910 14:04:31.350238    3581 cache.go:157] /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0910 14:04:31.350307    3581 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.157695708s
	I0910 14:04:31.350329    3581 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0910 14:04:31.436280    3581 start.go:128] duration metric: createHost completed in 2.243059041s
	I0910 14:04:31.436320    3581 start.go:83] releasing machines lock for "test-preload-404000", held for 2.243185792s
	W0910 14:04:31.436373    3581 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:04:31.445755    3581 out.go:177] * Deleting "test-preload-404000" in qemu2 ...
	W0910 14:04:31.464688    3581 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:04:31.464730    3581 start.go:687] Will try again in 5 seconds ...
	I0910 14:04:32.297143    3581 cache.go:157] /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0910 14:04:32.297206    3581 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.104388s
	I0910 14:04:32.297239    3581 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0910 14:04:33.263207    3581 cache.go:157] /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0910 14:04:33.263250    3581 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 4.070338166s
	I0910 14:04:33.263276    3581 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0910 14:04:33.665690    3581 cache.go:157] /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0910 14:04:33.665744    3581 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.473143542s
	I0910 14:04:33.665807    3581 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0910 14:04:34.149321    3581 cache.go:157] /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0910 14:04:34.149387    3581 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.956772541s
	I0910 14:04:34.149416    3581 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0910 14:04:36.119603    3581 cache.go:157] /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0910 14:04:36.119659    3581 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.926744542s
	I0910 14:04:36.119688    3581 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0910 14:04:36.464992    3581 start.go:365] acquiring machines lock for test-preload-404000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:04:36.465498    3581 start.go:369] acquired machines lock for "test-preload-404000" in 422µs
	I0910 14:04:36.465619    3581 start.go:93] Provisioning new machine with config: &{Name:test-preload-404000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-404000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:04:36.465858    3581 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:04:36.473127    3581 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 14:04:36.522430    3581 start.go:159] libmachine.API.Create for "test-preload-404000" (driver="qemu2")
	I0910 14:04:36.522468    3581 client.go:168] LocalClient.Create starting
	I0910 14:04:36.522585    3581 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:04:36.522644    3581 main.go:141] libmachine: Decoding PEM data...
	I0910 14:04:36.522669    3581 main.go:141] libmachine: Parsing certificate...
	I0910 14:04:36.522754    3581 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:04:36.522789    3581 main.go:141] libmachine: Decoding PEM data...
	I0910 14:04:36.522807    3581 main.go:141] libmachine: Parsing certificate...
	I0910 14:04:36.523295    3581 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:04:36.650416    3581 main.go:141] libmachine: Creating SSH key...
	I0910 14:04:36.766139    3581 main.go:141] libmachine: Creating Disk image...
	I0910 14:04:36.766145    3581 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:04:36.766286    3581 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/test-preload-404000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/test-preload-404000/disk.qcow2
	I0910 14:04:36.774756    3581 main.go:141] libmachine: STDOUT: 
	I0910 14:04:36.774771    3581 main.go:141] libmachine: STDERR: 
	I0910 14:04:36.774819    3581 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/test-preload-404000/disk.qcow2 +20000M
	I0910 14:04:36.782154    3581 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:04:36.782172    3581 main.go:141] libmachine: STDERR: 
	I0910 14:04:36.782183    3581 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/test-preload-404000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/test-preload-404000/disk.qcow2
	I0910 14:04:36.782191    3581 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:04:36.782239    3581 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/test-preload-404000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/test-preload-404000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/test-preload-404000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:b0:86:b4:b4:a2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/test-preload-404000/disk.qcow2
	I0910 14:04:36.783799    3581 main.go:141] libmachine: STDOUT: 
	I0910 14:04:36.783813    3581 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:04:36.783826    3581 client.go:171] LocalClient.Create took 261.354709ms
	I0910 14:04:38.717674    3581 cache.go:157] /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0910 14:04:38.717735    3581 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.524955958s
	I0910 14:04:38.717765    3581 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0910 14:04:38.717837    3581 cache.go:87] Successfully saved all images to host disk.
	I0910 14:04:38.784275    3581 start.go:128] duration metric: createHost completed in 2.318363167s
	I0910 14:04:38.784314    3581 start.go:83] releasing machines lock for "test-preload-404000", held for 2.318795917s
	W0910 14:04:38.784581    3581 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-404000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-404000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:04:38.795894    3581 out.go:177] 
	W0910 14:04:38.800030    3581 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 14:04:38.800057    3581 out.go:239] * 
	* 
	W0910 14:04:38.802608    3581 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 14:04:38.812991    3581 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-404000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:522: *** TestPreload FAILED at 2023-09-10 14:04:38.828736 -0700 PDT m=+751.512786501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-404000 -n test-preload-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-404000 -n test-preload-404000: exit status 7 (67.944791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-404000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-404000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-404000
--- FAIL: TestPreload (9.92s)
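Every qemu2 start in this run fails on the same precondition: the socket_vmnet socket at /var/run/socket_vmnet refuses connections before QEMU is even launched. A minimal sketch of that precondition check (the path is taken from SocketVMnetPath in the cluster config logged above; the `brew services` hint is an assumption about a Homebrew-managed install):

```shell
# Check whether the socket_vmnet unix socket exists before starting minikube.
# If it is missing, the socket_vmnet daemon is likely not running.
SOCKET=/var/run/socket_vmnet
if [ -S "$SOCKET" ]; then
  STATUS=present
else
  STATUS=missing   # e.g. try: sudo brew services start socket_vmnet
fi
echo "socket_vmnet: $STATUS"
```

Note that `[ -S path ]` tests specifically for a unix-domain socket, which matches how minikube's qemu2 driver connects to socket_vmnet_client.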

TestScheduledStopUnix (9.88s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-591000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-591000 --memory=2048 --driver=qemu2 : exit status 80 (9.707936625s)

-- stdout --
	* [scheduled-stop-591000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-591000 in cluster scheduled-stop-591000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-591000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-591000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-591000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-591000 in cluster scheduled-stop-591000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-591000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-591000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:522: *** TestScheduledStopUnix FAILED at 2023-09-10 14:04:48.702309 -0700 PDT m=+761.386380293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-591000 -n scheduled-stop-591000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-591000 -n scheduled-stop-591000: exit status 7 (67.789958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-591000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-591000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-591000
--- FAIL: TestScheduledStopUnix (9.88s)

TestSkaffold (11.85s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3476203163 version
skaffold_test.go:63: skaffold version: v2.7.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-095000 --memory=2600 --driver=qemu2 
E0910 14:04:55.983271    2200 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-095000 --memory=2600 --driver=qemu2 : exit status 80 (9.737371792s)

-- stdout --
	* [skaffold-095000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-095000 in cluster skaffold-095000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-095000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-095000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-095000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-095000 in cluster skaffold-095000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-095000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-095000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:522: *** TestSkaffold FAILED at 2023-09-10 14:05:00.554906 -0700 PDT m=+773.239002460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-095000 -n skaffold-095000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-095000 -n skaffold-095000: exit status 7 (62.628ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-095000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-095000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-095000
--- FAIL: TestSkaffold (11.85s)

TestRunningBinaryUpgrade (161.18s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:107: v1.6.2 release installation failed: bad response code: 404
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-09-10 14:08:21.753641 -0700 PDT m=+974.438170293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-134000 -n running-upgrade-134000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-134000 -n running-upgrade-134000: exit status 85 (83.594916ms)

-- stdout --
	* Profile "running-upgrade-134000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p running-upgrade-134000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "running-upgrade-134000" host is not running, skipping log retrieval (state="* Profile \"running-upgrade-134000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p running-upgrade-134000\"")
helpers_test.go:175: Cleaning up "running-upgrade-134000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-134000
--- FAIL: TestRunningBinaryUpgrade (161.18s)
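Unlike the socket_vmnet failures above, TestRunningBinaryUpgrade never reaches the driver: downloading the v1.6.2 release binary fails with a 404. A plausible explanation is that v1.6.2 predates darwin/arm64 minikube builds, so no such release asset exists. A hypothetical reconstruction of the asset name the test would request (the URL pattern is an assumption based on minikube's release bucket naming, not taken from this log):

```shell
# Hypothetical: build the release asset URL for an old minikube version
# on this host's platform (darwin/arm64). For v1.6.2 no arm64 asset was
# ever published, which would be consistent with the 404 above.
VERSION=v1.6.2
ASSET="minikube-darwin-arm64"
URL="https://storage.googleapis.com/minikube/releases/${VERSION}/${ASSET}"
echo "$URL"
```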

TestKubernetesUpgrade (15.26s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-429000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:235: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-429000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.741622791s)

-- stdout --
	* [kubernetes-upgrade-429000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubernetes-upgrade-429000 in cluster kubernetes-upgrade-429000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-429000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 14:08:22.107654    4082 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:08:22.107818    4082 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:08:22.107821    4082 out.go:309] Setting ErrFile to fd 2...
	I0910 14:08:22.107824    4082 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:08:22.107937    4082 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:08:22.108921    4082 out.go:303] Setting JSON to false
	I0910 14:08:22.124016    4082 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2277,"bootTime":1694377825,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0910 14:08:22.124081    4082 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0910 14:08:22.128588    4082 out.go:177] * [kubernetes-upgrade-429000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0910 14:08:22.135496    4082 out.go:177]   - MINIKUBE_LOCATION=17207
	I0910 14:08:22.139538    4082 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 14:08:22.135555    4082 notify.go:220] Checking for updates...
	I0910 14:08:22.143456    4082 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 14:08:22.146492    4082 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 14:08:22.149520    4082 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	I0910 14:08:22.152578    4082 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 14:08:22.155741    4082 config.go:182] Loaded profile config "cert-expiration-225000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:08:22.155811    4082 config.go:182] Loaded profile config "multinode-362000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:08:22.155856    4082 driver.go:373] Setting default libvirt URI to qemu:///system
	I0910 14:08:22.160544    4082 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 14:08:22.167480    4082 start.go:298] selected driver: qemu2
	I0910 14:08:22.167487    4082 start.go:902] validating driver "qemu2" against <nil>
	I0910 14:08:22.167492    4082 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 14:08:22.169311    4082 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0910 14:08:22.172500    4082 out.go:177] * Automatically selected the socket_vmnet network
	I0910 14:08:22.175537    4082 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0910 14:08:22.175555    4082 cni.go:84] Creating CNI manager for ""
	I0910 14:08:22.175561    4082 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0910 14:08:22.175565    4082 start_flags.go:321] config:
	{Name:kubernetes-upgrade-429000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-429000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 14:08:22.179500    4082 iso.go:125] acquiring lock: {Name:mk7b64d48bce9081530bc49bdfed6972c82152dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:08:22.186351    4082 out.go:177] * Starting control plane node kubernetes-upgrade-429000 in cluster kubernetes-upgrade-429000
	I0910 14:08:22.190515    4082 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0910 14:08:22.190534    4082 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0910 14:08:22.190550    4082 cache.go:57] Caching tarball of preloaded images
	I0910 14:08:22.190612    4082 preload.go:174] Found /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 14:08:22.190618    4082 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0910 14:08:22.190686    4082 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/kubernetes-upgrade-429000/config.json ...
	I0910 14:08:22.190698    4082 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/kubernetes-upgrade-429000/config.json: {Name:mk821c8e2209ae6889e7cc4385ceede5771fe926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 14:08:22.190906    4082 start.go:365] acquiring machines lock for kubernetes-upgrade-429000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:08:22.190937    4082 start.go:369] acquired machines lock for "kubernetes-upgrade-429000" in 21.958µs
	I0910 14:08:22.190948    4082 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-429000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-429000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:08:22.190972    4082 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:08:22.198508    4082 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 14:08:22.213474    4082 start.go:159] libmachine.API.Create for "kubernetes-upgrade-429000" (driver="qemu2")
	I0910 14:08:22.213492    4082 client.go:168] LocalClient.Create starting
	I0910 14:08:22.213545    4082 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:08:22.213576    4082 main.go:141] libmachine: Decoding PEM data...
	I0910 14:08:22.213586    4082 main.go:141] libmachine: Parsing certificate...
	I0910 14:08:22.213623    4082 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:08:22.213641    4082 main.go:141] libmachine: Decoding PEM data...
	I0910 14:08:22.213650    4082 main.go:141] libmachine: Parsing certificate...
	I0910 14:08:22.213979    4082 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:08:22.330788    4082 main.go:141] libmachine: Creating SSH key...
	I0910 14:08:22.482688    4082 main.go:141] libmachine: Creating Disk image...
	I0910 14:08:22.482695    4082 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:08:22.482858    4082 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubernetes-upgrade-429000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubernetes-upgrade-429000/disk.qcow2
	I0910 14:08:22.491775    4082 main.go:141] libmachine: STDOUT: 
	I0910 14:08:22.491788    4082 main.go:141] libmachine: STDERR: 
	I0910 14:08:22.491849    4082 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubernetes-upgrade-429000/disk.qcow2 +20000M
	I0910 14:08:22.499085    4082 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:08:22.499097    4082 main.go:141] libmachine: STDERR: 
	I0910 14:08:22.499109    4082 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubernetes-upgrade-429000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubernetes-upgrade-429000/disk.qcow2
	I0910 14:08:22.499116    4082 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:08:22.499152    4082 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubernetes-upgrade-429000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubernetes-upgrade-429000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubernetes-upgrade-429000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:ab:8b:5d:b7:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubernetes-upgrade-429000/disk.qcow2
	I0910 14:08:22.500681    4082 main.go:141] libmachine: STDOUT: 
	I0910 14:08:22.500709    4082 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:08:22.500730    4082 client.go:171] LocalClient.Create took 287.233125ms
	I0910 14:08:24.502877    4082 start.go:128] duration metric: createHost completed in 2.311892875s
	I0910 14:08:24.502970    4082 start.go:83] releasing machines lock for "kubernetes-upgrade-429000", held for 2.312000458s
	W0910 14:08:24.503087    4082 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:08:24.511404    4082 out.go:177] * Deleting "kubernetes-upgrade-429000" in qemu2 ...
	W0910 14:08:24.531992    4082 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:08:24.532026    4082 start.go:687] Will try again in 5 seconds ...
	I0910 14:08:29.534275    4082 start.go:365] acquiring machines lock for kubernetes-upgrade-429000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:08:29.534749    4082 start.go:369] acquired machines lock for "kubernetes-upgrade-429000" in 367.333µs
	I0910 14:08:29.535208    4082 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-429000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-429000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:08:29.535518    4082 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:08:29.544129    4082 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 14:08:29.591293    4082 start.go:159] libmachine.API.Create for "kubernetes-upgrade-429000" (driver="qemu2")
	I0910 14:08:29.591332    4082 client.go:168] LocalClient.Create starting
	I0910 14:08:29.591436    4082 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:08:29.591489    4082 main.go:141] libmachine: Decoding PEM data...
	I0910 14:08:29.591519    4082 main.go:141] libmachine: Parsing certificate...
	I0910 14:08:29.591585    4082 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:08:29.591621    4082 main.go:141] libmachine: Decoding PEM data...
	I0910 14:08:29.591637    4082 main.go:141] libmachine: Parsing certificate...
	I0910 14:08:29.592206    4082 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:08:29.721267    4082 main.go:141] libmachine: Creating SSH key...
	I0910 14:08:29.761068    4082 main.go:141] libmachine: Creating Disk image...
	I0910 14:08:29.761073    4082 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:08:29.761242    4082 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubernetes-upgrade-429000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubernetes-upgrade-429000/disk.qcow2
	I0910 14:08:29.769722    4082 main.go:141] libmachine: STDOUT: 
	I0910 14:08:29.769737    4082 main.go:141] libmachine: STDERR: 
	I0910 14:08:29.769800    4082 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubernetes-upgrade-429000/disk.qcow2 +20000M
	I0910 14:08:29.776920    4082 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:08:29.776934    4082 main.go:141] libmachine: STDERR: 
	I0910 14:08:29.776945    4082 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubernetes-upgrade-429000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubernetes-upgrade-429000/disk.qcow2
	I0910 14:08:29.776952    4082 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:08:29.777010    4082 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubernetes-upgrade-429000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubernetes-upgrade-429000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubernetes-upgrade-429000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:8b:ff:72:3d:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubernetes-upgrade-429000/disk.qcow2
	I0910 14:08:29.778555    4082 main.go:141] libmachine: STDOUT: 
	I0910 14:08:29.778569    4082 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:08:29.778582    4082 client.go:171] LocalClient.Create took 187.243042ms
	I0910 14:08:31.780738    4082 start.go:128] duration metric: createHost completed in 2.245201625s
	I0910 14:08:31.780830    4082 start.go:83] releasing machines lock for "kubernetes-upgrade-429000", held for 2.246029084s
	W0910 14:08:31.781331    4082 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-429000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-429000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:08:31.791067    4082 out.go:177] 
	W0910 14:08:31.796003    4082 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 14:08:31.796068    4082 out.go:239] * 
	* 
	W0910 14:08:31.798533    4082 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 14:08:31.807752    4082 out.go:177] 

** /stderr **
version_upgrade_test.go:237: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-429000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
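Every failure in the transcript above reduces to the same root cause: the qemu2 driver cannot reach the socket_vmnet daemon (`Failed to connect to "/var/run/socket_vmnet": Connection refused`). Before rerunning the suite, it is worth confirming the socket on the CI host exists at all. The helper below is a hypothetical sketch (the function name `check_vmnet_socket` is not part of minikube or socket_vmnet); it only tests that the expected path is a unix socket and does not start or manage the daemon.

```shell
#!/bin/sh
# Minimal preflight check for the socket_vmnet unix socket the qemu2 driver
# connects to. Prints "ok: <path>" if the path exists and is a socket,
# "missing: <path>" otherwise.
check_vmnet_socket() {
  path="${1:-/var/run/socket_vmnet}"
  if [ -S "$path" ]; then
    echo "ok: $path"
  else
    echo "missing: $path"
  fi
}

check_vmnet_socket /var/run/socket_vmnet
```

If the socket is missing, the daemon is likely not running on the agent; on a Homebrew-based install it is typically run as a root service, so restarting that service (and then rerunning `minikube logs --file=logs.txt` as the report advises) would be the next step.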
version_upgrade_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-429000
version_upgrade_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-429000 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-429000 status --format={{.Host}}: exit status 7 (36.14025ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-429000 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-429000 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.183628208s)

-- stdout --
	* [kubernetes-upgrade-429000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node kubernetes-upgrade-429000 in cluster kubernetes-upgrade-429000
	* Restarting existing qemu2 VM for "kubernetes-upgrade-429000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-429000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 14:08:31.990945    4100 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:08:31.991063    4100 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:08:31.991066    4100 out.go:309] Setting ErrFile to fd 2...
	I0910 14:08:31.991068    4100 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:08:31.991185    4100 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:08:31.992126    4100 out.go:303] Setting JSON to false
	I0910 14:08:32.007168    4100 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2286,"bootTime":1694377825,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0910 14:08:32.007237    4100 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0910 14:08:32.010870    4100 out.go:177] * [kubernetes-upgrade-429000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0910 14:08:32.021645    4100 out.go:177]   - MINIKUBE_LOCATION=17207
	I0910 14:08:32.025784    4100 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 14:08:32.021729    4100 notify.go:220] Checking for updates...
	I0910 14:08:32.030214    4100 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 14:08:32.036783    4100 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 14:08:32.039793    4100 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	I0910 14:08:32.042813    4100 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 14:08:32.047115    4100 config.go:182] Loaded profile config "kubernetes-upgrade-429000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0910 14:08:32.047387    4100 driver.go:373] Setting default libvirt URI to qemu:///system
	I0910 14:08:32.051771    4100 out.go:177] * Using the qemu2 driver based on existing profile
	I0910 14:08:32.058825    4100 start.go:298] selected driver: qemu2
	I0910 14:08:32.058832    4100 start.go:902] validating driver "qemu2" against &{Name:kubernetes-upgrade-429000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubern
etesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-429000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 14:08:32.058904    4100 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 14:08:32.060936    4100 cni.go:84] Creating CNI manager for ""
	I0910 14:08:32.060950    4100 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 14:08:32.060958    4100 start_flags.go:321] config:
	{Name:kubernetes-upgrade-429000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kubernetes-upgrade-429000 Nam
espace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFi
rmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 14:08:32.065010    4100 iso.go:125] acquiring lock: {Name:mk7b64d48bce9081530bc49bdfed6972c82152dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:08:32.071810    4100 out.go:177] * Starting control plane node kubernetes-upgrade-429000 in cluster kubernetes-upgrade-429000
	I0910 14:08:32.075599    4100 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0910 14:08:32.075616    4100 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0910 14:08:32.075628    4100 cache.go:57] Caching tarball of preloaded images
	I0910 14:08:32.075679    4100 preload.go:174] Found /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 14:08:32.075684    4100 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0910 14:08:32.075737    4100 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/kubernetes-upgrade-429000/config.json ...
	I0910 14:08:32.076063    4100 start.go:365] acquiring machines lock for kubernetes-upgrade-429000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:08:32.076092    4100 start.go:369] acquired machines lock for "kubernetes-upgrade-429000" in 23.333µs
	I0910 14:08:32.076102    4100 start.go:96] Skipping create...Using existing machine configuration
	I0910 14:08:32.076107    4100 fix.go:54] fixHost starting: 
	I0910 14:08:32.076225    4100 fix.go:102] recreateIfNeeded on kubernetes-upgrade-429000: state=Stopped err=<nil>
	W0910 14:08:32.076233    4100 fix.go:128] unexpected machine state, will restart: <nil>
	I0910 14:08:32.083775    4100 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-429000" ...
	I0910 14:08:32.086820    4100 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubernetes-upgrade-429000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubernetes-upgrade-429000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubernetes-upgrade-429000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:8b:ff:72:3d:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubernetes-upgrade-429000/disk.qcow2
	I0910 14:08:32.088737    4100 main.go:141] libmachine: STDOUT: 
	I0910 14:08:32.088755    4100 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:08:32.088792    4100 fix.go:56] fixHost completed within 12.683458ms
	I0910 14:08:32.088797    4100 start.go:83] releasing machines lock for "kubernetes-upgrade-429000", held for 12.700959ms
	W0910 14:08:32.088805    4100 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 14:08:32.088850    4100 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:08:32.088855    4100 start.go:687] Will try again in 5 seconds ...
	I0910 14:08:37.091145    4100 start.go:365] acquiring machines lock for kubernetes-upgrade-429000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:08:37.091505    4100 start.go:369] acquired machines lock for "kubernetes-upgrade-429000" in 257.792µs
	I0910 14:08:37.091645    4100 start.go:96] Skipping create...Using existing machine configuration
	I0910 14:08:37.091662    4100 fix.go:54] fixHost starting: 
	I0910 14:08:37.092386    4100 fix.go:102] recreateIfNeeded on kubernetes-upgrade-429000: state=Stopped err=<nil>
	W0910 14:08:37.092413    4100 fix.go:128] unexpected machine state, will restart: <nil>
	I0910 14:08:37.097027    4100 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-429000" ...
	I0910 14:08:37.104296    4100 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubernetes-upgrade-429000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubernetes-upgrade-429000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubernetes-upgrade-429000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:8b:ff:72:3d:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubernetes-upgrade-429000/disk.qcow2
	I0910 14:08:37.112826    4100 main.go:141] libmachine: STDOUT: 
	I0910 14:08:37.112887    4100 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:08:37.112997    4100 fix.go:56] fixHost completed within 21.329708ms
	I0910 14:08:37.113020    4100 start.go:83] releasing machines lock for "kubernetes-upgrade-429000", held for 21.492792ms
	W0910 14:08:37.113193    4100 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-429000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-429000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:08:37.121026    4100 out.go:177] 
	W0910 14:08:37.124145    4100 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 14:08:37.124168    4100 out.go:239] * 
	* 
	W0910 14:08:37.126545    4100 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 14:08:37.134004    4100 out.go:177] 

** /stderr **
version_upgrade_test.go:258: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-429000 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-429000 version --output=json
version_upgrade_test.go:261: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-429000 version --output=json: exit status 1 (63.979416ms)

** stderr ** 
	error: context "kubernetes-upgrade-429000" does not exist

** /stderr **
version_upgrade_test.go:263: error running kubectl: exit status 1
panic.go:522: *** TestKubernetesUpgrade FAILED at 2023-09-10 14:08:37.212479 -0700 PDT m=+989.897040543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-429000 -n kubernetes-upgrade-429000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-429000 -n kubernetes-upgrade-429000: exit status 7 (32.353208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-429000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-429000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-429000
--- FAIL: TestKubernetesUpgrade (15.26s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.42s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.31.2 on darwin (arm64)
- MINIKUBE_LOCATION=17207
- KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current4261805596/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.42s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.03s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.31.2 on darwin (arm64)
- MINIKUBE_LOCATION=17207
- KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current643698624/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.03s)

TestStoppedBinaryUpgrade/Setup (145.49s)

=== RUN   TestStoppedBinaryUpgrade/Setup
version_upgrade_test.go:168: v1.6.2 release installation failed: bad response code: 404
--- FAIL: TestStoppedBinaryUpgrade/Setup (145.49s)

TestPause/serial/Start (9.95s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-220000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-220000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.878338667s)

-- stdout --
	* [pause-220000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node pause-220000 in cluster pause-220000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-220000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-220000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-220000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-220000 -n pause-220000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-220000 -n pause-220000: exit status 7 (68.231791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-220000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.95s)

TestNoKubernetes/serial/StartWithK8s (9.77s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-235000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-235000 --driver=qemu2 : exit status 80 (9.697784333s)

-- stdout --
	* [NoKubernetes-235000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node NoKubernetes-235000 in cluster NoKubernetes-235000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-235000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-235000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-235000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-235000 -n NoKubernetes-235000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-235000 -n NoKubernetes-235000: exit status 7 (69.784333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-235000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.77s)

TestNoKubernetes/serial/StartWithStopK8s (5.32s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-235000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-235000 --no-kubernetes --driver=qemu2 : exit status 80 (5.244373833s)

-- stdout --
	* [NoKubernetes-235000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-235000
	* Restarting existing qemu2 VM for "NoKubernetes-235000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-235000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-235000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-235000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-235000 -n NoKubernetes-235000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-235000 -n NoKubernetes-235000: exit status 7 (70.87125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-235000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.32s)

TestNoKubernetes/serial/Start (5.31s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-235000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-235000 --no-kubernetes --driver=qemu2 : exit status 80 (5.239836834s)

-- stdout --
	* [NoKubernetes-235000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-235000
	* Restarting existing qemu2 VM for "NoKubernetes-235000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-235000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-235000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-235000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-235000 -n NoKubernetes-235000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-235000 -n NoKubernetes-235000: exit status 7 (68.269209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-235000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.31s)

TestNoKubernetes/serial/StartNoArgs (5.31s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-235000 --driver=qemu2 
E0910 14:09:15.001119    2200 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-235000 --driver=qemu2 : exit status 80 (5.236542166s)

-- stdout --
	* [NoKubernetes-235000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-235000
	* Restarting existing qemu2 VM for "NoKubernetes-235000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-235000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-235000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-235000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-235000 -n NoKubernetes-235000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-235000 -n NoKubernetes-235000: exit status 7 (69.159084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-235000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.31s)

TestNetworkPlugins/group/auto/Start (9.8s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-322000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-322000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.797041958s)

-- stdout --
	* [auto-322000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node auto-322000 in cluster auto-322000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-322000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 14:09:17.133545    4220 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:09:17.133663    4220 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:09:17.133666    4220 out.go:309] Setting ErrFile to fd 2...
	I0910 14:09:17.133668    4220 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:09:17.133785    4220 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:09:17.134817    4220 out.go:303] Setting JSON to false
	I0910 14:09:17.149884    4220 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2332,"bootTime":1694377825,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0910 14:09:17.149970    4220 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0910 14:09:17.154842    4220 out.go:177] * [auto-322000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0910 14:09:17.162889    4220 out.go:177]   - MINIKUBE_LOCATION=17207
	I0910 14:09:17.162976    4220 notify.go:220] Checking for updates...
	I0910 14:09:17.166883    4220 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 14:09:17.169897    4220 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 14:09:17.172851    4220 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 14:09:17.175807    4220 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	I0910 14:09:17.178900    4220 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 14:09:17.182154    4220 config.go:182] Loaded profile config "multinode-362000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:09:17.182194    4220 driver.go:373] Setting default libvirt URI to qemu:///system
	I0910 14:09:17.186835    4220 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 14:09:17.193876    4220 start.go:298] selected driver: qemu2
	I0910 14:09:17.193884    4220 start.go:902] validating driver "qemu2" against <nil>
	I0910 14:09:17.193890    4220 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 14:09:17.195966    4220 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0910 14:09:17.198859    4220 out.go:177] * Automatically selected the socket_vmnet network
	I0910 14:09:17.201903    4220 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 14:09:17.201922    4220 cni.go:84] Creating CNI manager for ""
	I0910 14:09:17.201927    4220 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 14:09:17.201931    4220 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 14:09:17.201935    4220 start_flags.go:321] config:
	{Name:auto-322000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:auto-322000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISoc
ket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0
AutoPauseInterval:1m0s}
	I0910 14:09:17.205991    4220 iso.go:125] acquiring lock: {Name:mk7b64d48bce9081530bc49bdfed6972c82152dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:09:17.209708    4220 out.go:177] * Starting control plane node auto-322000 in cluster auto-322000
	I0910 14:09:17.217871    4220 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0910 14:09:17.217894    4220 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0910 14:09:17.217915    4220 cache.go:57] Caching tarball of preloaded images
	I0910 14:09:17.217998    4220 preload.go:174] Found /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 14:09:17.218005    4220 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0910 14:09:17.218075    4220 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/auto-322000/config.json ...
	I0910 14:09:17.218087    4220 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/auto-322000/config.json: {Name:mk6b6083f7d025f21af16265215a552373f760f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 14:09:17.218290    4220 start.go:365] acquiring machines lock for auto-322000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:09:17.218317    4220 start.go:369] acquired machines lock for "auto-322000" in 22.208µs
	I0910 14:09:17.218327    4220 start.go:93] Provisioning new machine with config: &{Name:auto-322000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.1 ClusterName:auto-322000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:09:17.218363    4220 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:09:17.226886    4220 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0910 14:09:17.242605    4220 start.go:159] libmachine.API.Create for "auto-322000" (driver="qemu2")
	I0910 14:09:17.242639    4220 client.go:168] LocalClient.Create starting
	I0910 14:09:17.242723    4220 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:09:17.242764    4220 main.go:141] libmachine: Decoding PEM data...
	I0910 14:09:17.242777    4220 main.go:141] libmachine: Parsing certificate...
	I0910 14:09:17.242817    4220 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:09:17.242837    4220 main.go:141] libmachine: Decoding PEM data...
	I0910 14:09:17.242844    4220 main.go:141] libmachine: Parsing certificate...
	I0910 14:09:17.243228    4220 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:09:17.358830    4220 main.go:141] libmachine: Creating SSH key...
	I0910 14:09:17.452572    4220 main.go:141] libmachine: Creating Disk image...
	I0910 14:09:17.452578    4220 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:09:17.452703    4220 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/auto-322000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/auto-322000/disk.qcow2
	I0910 14:09:17.461138    4220 main.go:141] libmachine: STDOUT: 
	I0910 14:09:17.461154    4220 main.go:141] libmachine: STDERR: 
	I0910 14:09:17.461208    4220 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/auto-322000/disk.qcow2 +20000M
	I0910 14:09:17.468396    4220 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:09:17.468410    4220 main.go:141] libmachine: STDERR: 
	I0910 14:09:17.468427    4220 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/auto-322000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/auto-322000/disk.qcow2
	I0910 14:09:17.468435    4220 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:09:17.468478    4220 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/auto-322000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/auto-322000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/auto-322000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:f2:d7:cc:58:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/auto-322000/disk.qcow2
	I0910 14:09:17.470011    4220 main.go:141] libmachine: STDOUT: 
	I0910 14:09:17.470025    4220 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:09:17.470043    4220 client.go:171] LocalClient.Create took 227.397833ms
	I0910 14:09:19.472199    4220 start.go:128] duration metric: createHost completed in 2.253823333s
	I0910 14:09:19.472513    4220 start.go:83] releasing machines lock for "auto-322000", held for 2.254190583s
	W0910 14:09:19.472568    4220 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:09:19.480888    4220 out.go:177] * Deleting "auto-322000" in qemu2 ...
	W0910 14:09:19.500688    4220 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:09:19.500721    4220 start.go:687] Will try again in 5 seconds ...
	I0910 14:09:24.500984    4220 start.go:365] acquiring machines lock for auto-322000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:09:24.501475    4220 start.go:369] acquired machines lock for "auto-322000" in 367.375µs
	I0910 14:09:24.501612    4220 start.go:93] Provisioning new machine with config: &{Name:auto-322000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.1 ClusterName:auto-322000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:09:24.501955    4220 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:09:24.507707    4220 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0910 14:09:24.553720    4220 start.go:159] libmachine.API.Create for "auto-322000" (driver="qemu2")
	I0910 14:09:24.553761    4220 client.go:168] LocalClient.Create starting
	I0910 14:09:24.553864    4220 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:09:24.553932    4220 main.go:141] libmachine: Decoding PEM data...
	I0910 14:09:24.553958    4220 main.go:141] libmachine: Parsing certificate...
	I0910 14:09:24.554024    4220 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:09:24.554060    4220 main.go:141] libmachine: Decoding PEM data...
	I0910 14:09:24.554072    4220 main.go:141] libmachine: Parsing certificate...
	I0910 14:09:24.554557    4220 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:09:24.680522    4220 main.go:141] libmachine: Creating SSH key...
	I0910 14:09:24.842158    4220 main.go:141] libmachine: Creating Disk image...
	I0910 14:09:24.842164    4220 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:09:24.842321    4220 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/auto-322000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/auto-322000/disk.qcow2
	I0910 14:09:24.851226    4220 main.go:141] libmachine: STDOUT: 
	I0910 14:09:24.851244    4220 main.go:141] libmachine: STDERR: 
	I0910 14:09:24.851299    4220 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/auto-322000/disk.qcow2 +20000M
	I0910 14:09:24.858435    4220 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:09:24.858447    4220 main.go:141] libmachine: STDERR: 
	I0910 14:09:24.858465    4220 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/auto-322000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/auto-322000/disk.qcow2
	I0910 14:09:24.858474    4220 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:09:24.858506    4220 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/auto-322000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/auto-322000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/auto-322000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:1f:5e:57:d9:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/auto-322000/disk.qcow2
	I0910 14:09:24.859948    4220 main.go:141] libmachine: STDOUT: 
	I0910 14:09:24.859961    4220 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:09:24.859972    4220 client.go:171] LocalClient.Create took 306.205916ms
	I0910 14:09:26.862209    4220 start.go:128] duration metric: createHost completed in 2.360199541s
	I0910 14:09:26.862293    4220 start.go:83] releasing machines lock for "auto-322000", held for 2.360800041s
	W0910 14:09:26.862660    4220 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-322000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-322000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:09:26.871436    4220 out.go:177] 
	W0910 14:09:26.876516    4220 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 14:09:26.876554    4220 out.go:239] * 
	* 
	W0910 14:09:26.879061    4220 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 14:09:26.888290    4220 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.80s)
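Editor's note: every create/retry in the run above dies on the same symptom, `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet Unix socket exists (or is unreachable) but no daemon is accepting on it. As a hedged illustration only (not part of minikube; `probe_unix_socket` is a hypothetical helper), this sketch shows how such a socket path could be probed, and reproduces the "refused" case with a socket file that was bound but never `listen()`ed on:

```python
import os
import socket
import tempfile

def probe_unix_socket(path: str) -> str:
    """Return a short status for a Unix-domain stream socket path."""
    if not os.path.exists(path):
        return "missing"            # daemon never created the socket file
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        return "accepting"          # something is listening
    except ConnectionRefusedError:
        return "refused"            # file exists, but nothing listens on it
    finally:
        s.close()

# Demonstration: a bound-but-not-listening socket yields ECONNREFUSED,
# the same condition the QEMU log reports for /var/run/socket_vmnet.
tmp = os.path.join(tempfile.mkdtemp(), "socket_vmnet")
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(tmp)                       # creates the socket file; listen() never called
print(probe_unix_socket(tmp))       # refused
```

On a CI host this usually means the socket_vmnet launchd/root daemon is not running; restarting it (per the socket_vmnet install docs) is the typical fix.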

TestNetworkPlugins/group/kindnet/Start (9.77s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-322000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-322000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.763707667s)

-- stdout --
	* [kindnet-322000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kindnet-322000 in cluster kindnet-322000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-322000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 14:09:29.029163    4330 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:09:29.029297    4330 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:09:29.029300    4330 out.go:309] Setting ErrFile to fd 2...
	I0910 14:09:29.029302    4330 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:09:29.029408    4330 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:09:29.030405    4330 out.go:303] Setting JSON to false
	I0910 14:09:29.045624    4330 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2344,"bootTime":1694377825,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0910 14:09:29.045688    4330 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0910 14:09:29.050819    4330 out.go:177] * [kindnet-322000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0910 14:09:29.057817    4330 out.go:177]   - MINIKUBE_LOCATION=17207
	I0910 14:09:29.061818    4330 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 14:09:29.057886    4330 notify.go:220] Checking for updates...
	I0910 14:09:29.067757    4330 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 14:09:29.070766    4330 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 14:09:29.073855    4330 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	I0910 14:09:29.076772    4330 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 14:09:29.080077    4330 config.go:182] Loaded profile config "multinode-362000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:09:29.080117    4330 driver.go:373] Setting default libvirt URI to qemu:///system
	I0910 14:09:29.084801    4330 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 14:09:29.091729    4330 start.go:298] selected driver: qemu2
	I0910 14:09:29.091736    4330 start.go:902] validating driver "qemu2" against <nil>
	I0910 14:09:29.091742    4330 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 14:09:29.093754    4330 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0910 14:09:29.096774    4330 out.go:177] * Automatically selected the socket_vmnet network
	I0910 14:09:29.098211    4330 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 14:09:29.098237    4330 cni.go:84] Creating CNI manager for "kindnet"
	I0910 14:09:29.098251    4330 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0910 14:09:29.098255    4330 start_flags.go:321] config:
	{Name:kindnet-322000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kindnet-322000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: S
SHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 14:09:29.102342    4330 iso.go:125] acquiring lock: {Name:mk7b64d48bce9081530bc49bdfed6972c82152dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:09:29.109811    4330 out.go:177] * Starting control plane node kindnet-322000 in cluster kindnet-322000
	I0910 14:09:29.113754    4330 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0910 14:09:29.113777    4330 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0910 14:09:29.113833    4330 cache.go:57] Caching tarball of preloaded images
	I0910 14:09:29.113901    4330 preload.go:174] Found /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 14:09:29.113908    4330 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0910 14:09:29.113983    4330 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/kindnet-322000/config.json ...
	I0910 14:09:29.113994    4330 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/kindnet-322000/config.json: {Name:mkaf9f771f09ac958994ef3976e72f581a70b381 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 14:09:29.114211    4330 start.go:365] acquiring machines lock for kindnet-322000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:09:29.114244    4330 start.go:369] acquired machines lock for "kindnet-322000" in 26.208µs
	I0910 14:09:29.114255    4330 start.go:93] Provisioning new machine with config: &{Name:kindnet-322000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:kindnet-322000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:09:29.114287    4330 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:09:29.122703    4330 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0910 14:09:29.139015    4330 start.go:159] libmachine.API.Create for "kindnet-322000" (driver="qemu2")
	I0910 14:09:29.139037    4330 client.go:168] LocalClient.Create starting
	I0910 14:09:29.139099    4330 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:09:29.139126    4330 main.go:141] libmachine: Decoding PEM data...
	I0910 14:09:29.139140    4330 main.go:141] libmachine: Parsing certificate...
	I0910 14:09:29.139182    4330 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:09:29.139201    4330 main.go:141] libmachine: Decoding PEM data...
	I0910 14:09:29.139215    4330 main.go:141] libmachine: Parsing certificate...
	I0910 14:09:29.139529    4330 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:09:29.254879    4330 main.go:141] libmachine: Creating SSH key...
	I0910 14:09:29.388845    4330 main.go:141] libmachine: Creating Disk image...
	I0910 14:09:29.388853    4330 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:09:29.389044    4330 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kindnet-322000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kindnet-322000/disk.qcow2
	I0910 14:09:29.397710    4330 main.go:141] libmachine: STDOUT: 
	I0910 14:09:29.397731    4330 main.go:141] libmachine: STDERR: 
	I0910 14:09:29.397790    4330 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kindnet-322000/disk.qcow2 +20000M
	I0910 14:09:29.405073    4330 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:09:29.405086    4330 main.go:141] libmachine: STDERR: 
	I0910 14:09:29.405110    4330 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kindnet-322000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kindnet-322000/disk.qcow2
	I0910 14:09:29.405120    4330 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:09:29.405160    4330 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kindnet-322000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kindnet-322000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kindnet-322000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:3d:0b:6c:aa:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kindnet-322000/disk.qcow2
	I0910 14:09:29.406662    4330 main.go:141] libmachine: STDOUT: 
	I0910 14:09:29.406675    4330 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:09:29.406697    4330 client.go:171] LocalClient.Create took 267.65425ms
	I0910 14:09:31.408867    4330 start.go:128] duration metric: createHost completed in 2.294570375s
	I0910 14:09:31.408958    4330 start.go:83] releasing machines lock for "kindnet-322000", held for 2.294680334s
	W0910 14:09:31.409020    4330 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:09:31.416272    4330 out.go:177] * Deleting "kindnet-322000" in qemu2 ...
	W0910 14:09:31.440101    4330 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:09:31.440128    4330 start.go:687] Will try again in 5 seconds ...
	I0910 14:09:36.442422    4330 start.go:365] acquiring machines lock for kindnet-322000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:09:36.442960    4330 start.go:369] acquired machines lock for "kindnet-322000" in 419.25µs
	I0910 14:09:36.443112    4330 start.go:93] Provisioning new machine with config: &{Name:kindnet-322000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:kindnet-322000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:09:36.443454    4330 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:09:36.452104    4330 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0910 14:09:36.498744    4330 start.go:159] libmachine.API.Create for "kindnet-322000" (driver="qemu2")
	I0910 14:09:36.498784    4330 client.go:168] LocalClient.Create starting
	I0910 14:09:36.498912    4330 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:09:36.498969    4330 main.go:141] libmachine: Decoding PEM data...
	I0910 14:09:36.498992    4330 main.go:141] libmachine: Parsing certificate...
	I0910 14:09:36.499072    4330 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:09:36.499106    4330 main.go:141] libmachine: Decoding PEM data...
	I0910 14:09:36.499121    4330 main.go:141] libmachine: Parsing certificate...
	I0910 14:09:36.499627    4330 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:09:36.628739    4330 main.go:141] libmachine: Creating SSH key...
	I0910 14:09:36.702718    4330 main.go:141] libmachine: Creating Disk image...
	I0910 14:09:36.702725    4330 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:09:36.702865    4330 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kindnet-322000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kindnet-322000/disk.qcow2
	I0910 14:09:36.711791    4330 main.go:141] libmachine: STDOUT: 
	I0910 14:09:36.711804    4330 main.go:141] libmachine: STDERR: 
	I0910 14:09:36.711860    4330 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kindnet-322000/disk.qcow2 +20000M
	I0910 14:09:36.719046    4330 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:09:36.719059    4330 main.go:141] libmachine: STDERR: 
	I0910 14:09:36.719072    4330 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kindnet-322000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kindnet-322000/disk.qcow2
	I0910 14:09:36.719077    4330 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:09:36.719117    4330 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kindnet-322000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kindnet-322000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kindnet-322000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:0d:c3:e4:6f:1a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kindnet-322000/disk.qcow2
	I0910 14:09:36.720629    4330 main.go:141] libmachine: STDOUT: 
	I0910 14:09:36.720644    4330 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:09:36.720655    4330 client.go:171] LocalClient.Create took 221.865834ms
	I0910 14:09:38.722812    4330 start.go:128] duration metric: createHost completed in 2.2793405s
	I0910 14:09:38.722885    4330 start.go:83] releasing machines lock for "kindnet-322000", held for 2.279907625s
	W0910 14:09:38.723329    4330 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-322000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-322000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:09:38.734031    4330 out.go:177] 
	W0910 14:09:38.737997    4330 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 14:09:38.738114    4330 out.go:239] * 
	* 
	W0910 14:09:38.740928    4330 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 14:09:38.750981    4330 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.77s)
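Every failure above stems from the same root cause: `socket_vmnet_client` cannot reach the daemon's unix socket at `/var/run/socket_vmnet` ("Connection refused"), so the QEMU VM never starts. A minimal pre-flight check, assuming the paths from the log above and a Homebrew-style `socket_vmnet` install (adjust both if your setup differs), might look like:

```shell
# Pre-flight check for the socket_vmnet unix socket used by the qemu2 driver.
# Paths are taken from the failing log lines; they are assumptions for other setups.
SOCKET=/var/run/socket_vmnet
CLIENT=/opt/socket_vmnet/bin/socket_vmnet_client

if [ -S "$SOCKET" ]; then
  echo "OK: $SOCKET exists and is a socket"
else
  # This is the state the log above shows: the daemon is not serving the socket,
  # so every socket_vmnet_client invocation fails with "Connection refused".
  echo "MISSING: $SOCKET not found; start the socket_vmnet daemon before running tests"
fi

if [ -x "$CLIENT" ]; then
  echo "OK: client binary present at $CLIENT"
else
  echo "MISSING: client binary not found at $CLIENT"
fi
```

On a healthy agent both checks print `OK`; on this run the first check would report the socket missing, matching the repeated "Connection refused" errors.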

                                                
                                    
TestNetworkPlugins/group/calico/Start (9.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-322000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
E0910 14:09:42.710786    2200 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/ingress-addon-legacy-065000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-322000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.697944625s)

                                                
                                                
-- stdout --
	* [calico-322000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node calico-322000 in cluster calico-322000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-322000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 14:09:40.992591    4444 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:09:40.992703    4444 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:09:40.992707    4444 out.go:309] Setting ErrFile to fd 2...
	I0910 14:09:40.992709    4444 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:09:40.992841    4444 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:09:40.993820    4444 out.go:303] Setting JSON to false
	I0910 14:09:41.008874    4444 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2355,"bootTime":1694377825,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0910 14:09:41.008949    4444 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0910 14:09:41.016477    4444 out.go:177] * [calico-322000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0910 14:09:41.020499    4444 out.go:177]   - MINIKUBE_LOCATION=17207
	I0910 14:09:41.020561    4444 notify.go:220] Checking for updates...
	I0910 14:09:41.021995    4444 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 14:09:41.024431    4444 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 14:09:41.027467    4444 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 14:09:41.030430    4444 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	I0910 14:09:41.033416    4444 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 14:09:41.036713    4444 config.go:182] Loaded profile config "multinode-362000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:09:41.036753    4444 driver.go:373] Setting default libvirt URI to qemu:///system
	I0910 14:09:41.041448    4444 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 14:09:41.048385    4444 start.go:298] selected driver: qemu2
	I0910 14:09:41.048391    4444 start.go:902] validating driver "qemu2" against <nil>
	I0910 14:09:41.048397    4444 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 14:09:41.050283    4444 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0910 14:09:41.053457    4444 out.go:177] * Automatically selected the socket_vmnet network
	I0910 14:09:41.056493    4444 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 14:09:41.056510    4444 cni.go:84] Creating CNI manager for "calico"
	I0910 14:09:41.056514    4444 start_flags.go:316] Found "Calico" CNI - setting NetworkPlugin=cni
	I0910 14:09:41.056520    4444 start_flags.go:321] config:
	{Name:calico-322000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:calico-322000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHA
gentPID:0 AutoPauseInterval:1m0s}
	I0910 14:09:41.060656    4444 iso.go:125] acquiring lock: {Name:mk7b64d48bce9081530bc49bdfed6972c82152dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:09:41.066293    4444 out.go:177] * Starting control plane node calico-322000 in cluster calico-322000
	I0910 14:09:41.070442    4444 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0910 14:09:41.070469    4444 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0910 14:09:41.070478    4444 cache.go:57] Caching tarball of preloaded images
	I0910 14:09:41.070560    4444 preload.go:174] Found /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 14:09:41.070567    4444 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0910 14:09:41.070632    4444 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/calico-322000/config.json ...
	I0910 14:09:41.070647    4444 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/calico-322000/config.json: {Name:mk36e701a51b7d071e9ff9c6d1942f89a521b7f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 14:09:41.070839    4444 start.go:365] acquiring machines lock for calico-322000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:09:41.070869    4444 start.go:369] acquired machines lock for "calico-322000" in 24.375µs
	I0910 14:09:41.070879    4444 start.go:93] Provisioning new machine with config: &{Name:calico-322000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.1 ClusterName:calico-322000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:09:41.070945    4444 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:09:41.078395    4444 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0910 14:09:41.093634    4444 start.go:159] libmachine.API.Create for "calico-322000" (driver="qemu2")
	I0910 14:09:41.093670    4444 client.go:168] LocalClient.Create starting
	I0910 14:09:41.093731    4444 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:09:41.093760    4444 main.go:141] libmachine: Decoding PEM data...
	I0910 14:09:41.093777    4444 main.go:141] libmachine: Parsing certificate...
	I0910 14:09:41.093824    4444 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:09:41.093846    4444 main.go:141] libmachine: Decoding PEM data...
	I0910 14:09:41.093852    4444 main.go:141] libmachine: Parsing certificate...
	I0910 14:09:41.094146    4444 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:09:41.212543    4444 main.go:141] libmachine: Creating SSH key...
	I0910 14:09:41.289384    4444 main.go:141] libmachine: Creating Disk image...
	I0910 14:09:41.289392    4444 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:09:41.289539    4444 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/calico-322000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/calico-322000/disk.qcow2
	I0910 14:09:41.298205    4444 main.go:141] libmachine: STDOUT: 
	I0910 14:09:41.298219    4444 main.go:141] libmachine: STDERR: 
	I0910 14:09:41.298272    4444 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/calico-322000/disk.qcow2 +20000M
	I0910 14:09:41.305423    4444 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:09:41.305446    4444 main.go:141] libmachine: STDERR: 
	I0910 14:09:41.305468    4444 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/calico-322000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/calico-322000/disk.qcow2
	I0910 14:09:41.305480    4444 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:09:41.305538    4444 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/calico-322000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/calico-322000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/calico-322000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:33:3f:bd:f5:35 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/calico-322000/disk.qcow2
	I0910 14:09:41.307117    4444 main.go:141] libmachine: STDOUT: 
	I0910 14:09:41.307129    4444 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:09:41.307147    4444 client.go:171] LocalClient.Create took 213.467416ms
	I0910 14:09:43.309364    4444 start.go:128] duration metric: createHost completed in 2.238356542s
	I0910 14:09:43.309433    4444 start.go:83] releasing machines lock for "calico-322000", held for 2.238556833s
	W0910 14:09:43.309484    4444 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:09:43.320797    4444 out.go:177] * Deleting "calico-322000" in qemu2 ...
	W0910 14:09:43.340529    4444 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:09:43.340560    4444 start.go:687] Will try again in 5 seconds ...
	I0910 14:09:48.342808    4444 start.go:365] acquiring machines lock for calico-322000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:09:48.343349    4444 start.go:369] acquired machines lock for "calico-322000" in 425.917µs
	I0910 14:09:48.343500    4444 start.go:93] Provisioning new machine with config: &{Name:calico-322000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:calico-322000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:09:48.343846    4444 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:09:48.353482    4444 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0910 14:09:48.401043    4444 start.go:159] libmachine.API.Create for "calico-322000" (driver="qemu2")
	I0910 14:09:48.401091    4444 client.go:168] LocalClient.Create starting
	I0910 14:09:48.401330    4444 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:09:48.401410    4444 main.go:141] libmachine: Decoding PEM data...
	I0910 14:09:48.401452    4444 main.go:141] libmachine: Parsing certificate...
	I0910 14:09:48.401541    4444 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:09:48.401577    4444 main.go:141] libmachine: Decoding PEM data...
	I0910 14:09:48.401592    4444 main.go:141] libmachine: Parsing certificate...
	I0910 14:09:48.402142    4444 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:09:48.530477    4444 main.go:141] libmachine: Creating SSH key...
	I0910 14:09:48.600912    4444 main.go:141] libmachine: Creating Disk image...
	I0910 14:09:48.600917    4444 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:09:48.601067    4444 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/calico-322000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/calico-322000/disk.qcow2
	I0910 14:09:48.609845    4444 main.go:141] libmachine: STDOUT: 
	I0910 14:09:48.609859    4444 main.go:141] libmachine: STDERR: 
	I0910 14:09:48.609926    4444 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/calico-322000/disk.qcow2 +20000M
	I0910 14:09:48.617199    4444 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:09:48.617212    4444 main.go:141] libmachine: STDERR: 
	I0910 14:09:48.617227    4444 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/calico-322000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/calico-322000/disk.qcow2
	I0910 14:09:48.617244    4444 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:09:48.617279    4444 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/calico-322000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/calico-322000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/calico-322000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:4c:9d:05:5d:ad -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/calico-322000/disk.qcow2
	I0910 14:09:48.618887    4444 main.go:141] libmachine: STDOUT: 
	I0910 14:09:48.618902    4444 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:09:48.618922    4444 client.go:171] LocalClient.Create took 217.822166ms
	I0910 14:09:50.621069    4444 start.go:128] duration metric: createHost completed in 2.277207208s
	I0910 14:09:50.621133    4444 start.go:83] releasing machines lock for "calico-322000", held for 2.277764625s
	W0910 14:09:50.621602    4444 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-322000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-322000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:09:50.632104    4444 out.go:177] 
	W0910 14:09:50.636297    4444 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 14:09:50.636338    4444 out.go:239] * 
	* 
	W0910 14:09:50.639145    4444 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 14:09:50.649239    4444 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.70s)
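
Every start failure in this run reduces to the same root cause: `socket_vmnet_client` cannot reach the unix socket at `/var/run/socket_vmnet`, so QEMU never gets its network file descriptor. A minimal reachability probe for that socket can be sketched as follows (the `probeVMnetSocket` helper is hypothetical, written for illustration only — it is not part of minikube or socket_vmnet):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// probeVMnetSocket dials the unix-domain socket that
// socket_vmnet_client connects to before handing a network fd
// to qemu-system-aarch64. A dial error here corresponds to the
// `Failed to connect to "/var/run/socket_vmnet": Connection refused`
// seen throughout this report.
func probeVMnetSocket(path string) error {
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	if err := probeVMnetSocket("/var/run/socket_vmnet"); err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	fmt.Println("socket_vmnet reachable")
}
```

If the probe fails, the usual fix on the host is to (re)start the socket_vmnet daemon so the socket exists and accepts connections before rerunning the suite.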

TestNetworkPlugins/group/custom-flannel/Start (9.86s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-322000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-322000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.858599042s)

-- stdout --
	* [custom-flannel-322000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node custom-flannel-322000 in cluster custom-flannel-322000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-322000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 14:09:53.030259    4564 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:09:53.030378    4564 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:09:53.030381    4564 out.go:309] Setting ErrFile to fd 2...
	I0910 14:09:53.030383    4564 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:09:53.030505    4564 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:09:53.031502    4564 out.go:303] Setting JSON to false
	I0910 14:09:53.046810    4564 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2368,"bootTime":1694377825,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0910 14:09:53.046890    4564 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0910 14:09:53.051635    4564 out.go:177] * [custom-flannel-322000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0910 14:09:53.059575    4564 out.go:177]   - MINIKUBE_LOCATION=17207
	I0910 14:09:53.059631    4564 notify.go:220] Checking for updates...
	I0910 14:09:53.063642    4564 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 14:09:53.066617    4564 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 14:09:53.069574    4564 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 14:09:53.076582    4564 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	I0910 14:09:53.079595    4564 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 14:09:53.082949    4564 config.go:182] Loaded profile config "multinode-362000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:09:53.082993    4564 driver.go:373] Setting default libvirt URI to qemu:///system
	I0910 14:09:53.087585    4564 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 14:09:53.094547    4564 start.go:298] selected driver: qemu2
	I0910 14:09:53.094552    4564 start.go:902] validating driver "qemu2" against <nil>
	I0910 14:09:53.094562    4564 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 14:09:53.096478    4564 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0910 14:09:53.099625    4564 out.go:177] * Automatically selected the socket_vmnet network
	I0910 14:09:53.102644    4564 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 14:09:53.102667    4564 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0910 14:09:53.102678    4564 start_flags.go:316] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0910 14:09:53.102683    4564 start_flags.go:321] config:
	{Name:custom-flannel-322000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:custom-flannel-322000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 14:09:53.106817    4564 iso.go:125] acquiring lock: {Name:mk7b64d48bce9081530bc49bdfed6972c82152dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:09:53.113639    4564 out.go:177] * Starting control plane node custom-flannel-322000 in cluster custom-flannel-322000
	I0910 14:09:53.117571    4564 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0910 14:09:53.117588    4564 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0910 14:09:53.117600    4564 cache.go:57] Caching tarball of preloaded images
	I0910 14:09:53.117648    4564 preload.go:174] Found /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 14:09:53.117654    4564 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0910 14:09:53.117702    4564 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/custom-flannel-322000/config.json ...
	I0910 14:09:53.117714    4564 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/custom-flannel-322000/config.json: {Name:mkf5511d90132a33d44c955046835e6c852ed5d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 14:09:53.117903    4564 start.go:365] acquiring machines lock for custom-flannel-322000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:09:53.117933    4564 start.go:369] acquired machines lock for "custom-flannel-322000" in 23.042µs
	I0910 14:09:53.117943    4564 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-322000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:custom-flannel-322000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:09:53.117979    4564 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:09:53.125563    4564 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0910 14:09:53.141099    4564 start.go:159] libmachine.API.Create for "custom-flannel-322000" (driver="qemu2")
	I0910 14:09:53.141121    4564 client.go:168] LocalClient.Create starting
	I0910 14:09:53.141178    4564 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:09:53.141207    4564 main.go:141] libmachine: Decoding PEM data...
	I0910 14:09:53.141222    4564 main.go:141] libmachine: Parsing certificate...
	I0910 14:09:53.141250    4564 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:09:53.141269    4564 main.go:141] libmachine: Decoding PEM data...
	I0910 14:09:53.141281    4564 main.go:141] libmachine: Parsing certificate...
	I0910 14:09:53.141592    4564 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:09:53.262402    4564 main.go:141] libmachine: Creating SSH key...
	I0910 14:09:53.477047    4564 main.go:141] libmachine: Creating Disk image...
	I0910 14:09:53.477063    4564 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:09:53.477245    4564 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/custom-flannel-322000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/custom-flannel-322000/disk.qcow2
	I0910 14:09:53.486279    4564 main.go:141] libmachine: STDOUT: 
	I0910 14:09:53.486291    4564 main.go:141] libmachine: STDERR: 
	I0910 14:09:53.486344    4564 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/custom-flannel-322000/disk.qcow2 +20000M
	I0910 14:09:53.493620    4564 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:09:53.493630    4564 main.go:141] libmachine: STDERR: 
	I0910 14:09:53.493652    4564 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/custom-flannel-322000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/custom-flannel-322000/disk.qcow2
	I0910 14:09:53.493659    4564 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:09:53.493691    4564 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/custom-flannel-322000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/custom-flannel-322000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/custom-flannel-322000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:0b:1c:8a:d9:2b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/custom-flannel-322000/disk.qcow2
	I0910 14:09:53.495134    4564 main.go:141] libmachine: STDOUT: 
	I0910 14:09:53.495145    4564 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:09:53.495163    4564 client.go:171] LocalClient.Create took 354.036375ms
	I0910 14:09:55.497326    4564 start.go:128] duration metric: createHost completed in 2.379327167s
	I0910 14:09:55.497413    4564 start.go:83] releasing machines lock for "custom-flannel-322000", held for 2.37947475s
	W0910 14:09:55.497480    4564 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:09:55.505892    4564 out.go:177] * Deleting "custom-flannel-322000" in qemu2 ...
	W0910 14:09:55.531134    4564 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:09:55.531168    4564 start.go:687] Will try again in 5 seconds ...
	I0910 14:10:00.533403    4564 start.go:365] acquiring machines lock for custom-flannel-322000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:10:00.533965    4564 start.go:369] acquired machines lock for "custom-flannel-322000" in 431.042µs
	I0910 14:10:00.534087    4564 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-322000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:custom-flannel-322000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:10:00.534560    4564 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:10:00.540274    4564 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0910 14:10:00.585401    4564 start.go:159] libmachine.API.Create for "custom-flannel-322000" (driver="qemu2")
	I0910 14:10:00.585446    4564 client.go:168] LocalClient.Create starting
	I0910 14:10:00.585605    4564 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:10:00.585664    4564 main.go:141] libmachine: Decoding PEM data...
	I0910 14:10:00.585680    4564 main.go:141] libmachine: Parsing certificate...
	I0910 14:10:00.585774    4564 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:10:00.585810    4564 main.go:141] libmachine: Decoding PEM data...
	I0910 14:10:00.585828    4564 main.go:141] libmachine: Parsing certificate...
	I0910 14:10:00.586370    4564 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:10:00.713442    4564 main.go:141] libmachine: Creating SSH key...
	I0910 14:10:00.800816    4564 main.go:141] libmachine: Creating Disk image...
	I0910 14:10:00.800822    4564 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:10:00.800958    4564 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/custom-flannel-322000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/custom-flannel-322000/disk.qcow2
	I0910 14:10:00.809486    4564 main.go:141] libmachine: STDOUT: 
	I0910 14:10:00.809499    4564 main.go:141] libmachine: STDERR: 
	I0910 14:10:00.809565    4564 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/custom-flannel-322000/disk.qcow2 +20000M
	I0910 14:10:00.816862    4564 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:10:00.816874    4564 main.go:141] libmachine: STDERR: 
	I0910 14:10:00.816901    4564 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/custom-flannel-322000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/custom-flannel-322000/disk.qcow2
	I0910 14:10:00.816907    4564 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:10:00.816940    4564 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/custom-flannel-322000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/custom-flannel-322000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/custom-flannel-322000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:cb:60:a8:20:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/custom-flannel-322000/disk.qcow2
	I0910 14:10:00.818451    4564 main.go:141] libmachine: STDOUT: 
	I0910 14:10:00.818462    4564 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:10:00.818474    4564 client.go:171] LocalClient.Create took 233.024167ms
	I0910 14:10:02.820628    4564 start.go:128] duration metric: createHost completed in 2.286050875s
	I0910 14:10:02.820696    4564 start.go:83] releasing machines lock for "custom-flannel-322000", held for 2.286709125s
	W0910 14:10:02.821070    4564 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-322000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-322000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:10:02.830697    4564 out.go:177] 
	W0910 14:10:02.835770    4564 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 14:10:02.835800    4564 out.go:239] * 
	* 
	W0910 14:10:02.838591    4564 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 14:10:02.847655    4564 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.86s)
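Every failure in this group reduces to the same root cause: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet daemon is not listening when the qemu2 driver launches `socket_vmnet_client`. A minimal pre-flight check is sketched below; the socket path is taken from the log output above, but how the daemon is managed (launchd, `brew services`, or manual) varies by install, so treat the remediation hint as an assumption:

```shell
# Verify the socket_vmnet daemon's Unix socket exists before running the suite.
# SOCK matches SocketVMnetPath in the minikube config dumps above.
SOCK=/var/run/socket_vmnet
if [ -S "$SOCK" ]; then
  echo "socket_vmnet socket present: $SOCK"
else
  # Daemon likely not running; on a Homebrew install it is typically
  # started with: sudo brew services start socket_vmnet  (assumption)
  echo "socket_vmnet socket missing: $SOCK (daemon likely not running)"
fi
```

Wiring a check like this into the agent's pre-test setup would fail fast instead of letting each `minikube start` retry and burn ~10s per test.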

TestNetworkPlugins/group/false/Start (9.75s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-322000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-322000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.746413458s)

-- stdout --
	* [false-322000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node false-322000 in cluster false-322000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-322000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 14:10:05.202452    4685 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:10:05.202581    4685 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:10:05.202584    4685 out.go:309] Setting ErrFile to fd 2...
	I0910 14:10:05.202586    4685 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:10:05.202696    4685 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:10:05.203675    4685 out.go:303] Setting JSON to false
	I0910 14:10:05.218632    4685 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2380,"bootTime":1694377825,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0910 14:10:05.218718    4685 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0910 14:10:05.226094    4685 out.go:177] * [false-322000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0910 14:10:05.234027    4685 out.go:177]   - MINIKUBE_LOCATION=17207
	I0910 14:10:05.234084    4685 notify.go:220] Checking for updates...
	I0910 14:10:05.238110    4685 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 14:10:05.241100    4685 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 14:10:05.242530    4685 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 14:10:05.245110    4685 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	I0910 14:10:05.248120    4685 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 14:10:05.251479    4685 config.go:182] Loaded profile config "multinode-362000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:10:05.251524    4685 driver.go:373] Setting default libvirt URI to qemu:///system
	I0910 14:10:05.255978    4685 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 14:10:05.263034    4685 start.go:298] selected driver: qemu2
	I0910 14:10:05.263039    4685 start.go:902] validating driver "qemu2" against <nil>
	I0910 14:10:05.263046    4685 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 14:10:05.264998    4685 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0910 14:10:05.268033    4685 out.go:177] * Automatically selected the socket_vmnet network
	I0910 14:10:05.271130    4685 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 14:10:05.271151    4685 cni.go:84] Creating CNI manager for "false"
	I0910 14:10:05.271155    4685 start_flags.go:321] config:
	{Name:false-322000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:false-322000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRIS
ocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPI
D:0 AutoPauseInterval:1m0s}
	I0910 14:10:05.275540    4685 iso.go:125] acquiring lock: {Name:mk7b64d48bce9081530bc49bdfed6972c82152dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:10:05.283068    4685 out.go:177] * Starting control plane node false-322000 in cluster false-322000
	I0910 14:10:05.287079    4685 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0910 14:10:05.287108    4685 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0910 14:10:05.287119    4685 cache.go:57] Caching tarball of preloaded images
	I0910 14:10:05.287200    4685 preload.go:174] Found /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 14:10:05.287207    4685 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0910 14:10:05.287270    4685 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/false-322000/config.json ...
	I0910 14:10:05.287283    4685 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/false-322000/config.json: {Name:mkf58e3212fc0483f4e975420d5b87b0bf92b6d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 14:10:05.287507    4685 start.go:365] acquiring machines lock for false-322000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:10:05.287538    4685 start.go:369] acquired machines lock for "false-322000" in 25.208µs
	I0910 14:10:05.287550    4685 start.go:93] Provisioning new machine with config: &{Name:false-322000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.1 ClusterName:false-322000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:10:05.287594    4685 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:10:05.292004    4685 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0910 14:10:05.308541    4685 start.go:159] libmachine.API.Create for "false-322000" (driver="qemu2")
	I0910 14:10:05.308564    4685 client.go:168] LocalClient.Create starting
	I0910 14:10:05.308624    4685 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:10:05.308653    4685 main.go:141] libmachine: Decoding PEM data...
	I0910 14:10:05.308667    4685 main.go:141] libmachine: Parsing certificate...
	I0910 14:10:05.308709    4685 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:10:05.308729    4685 main.go:141] libmachine: Decoding PEM data...
	I0910 14:10:05.308739    4685 main.go:141] libmachine: Parsing certificate...
	I0910 14:10:05.309034    4685 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:10:05.424117    4685 main.go:141] libmachine: Creating SSH key...
	I0910 14:10:05.476097    4685 main.go:141] libmachine: Creating Disk image...
	I0910 14:10:05.476107    4685 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:10:05.476242    4685 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/false-322000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/false-322000/disk.qcow2
	I0910 14:10:05.484697    4685 main.go:141] libmachine: STDOUT: 
	I0910 14:10:05.484711    4685 main.go:141] libmachine: STDERR: 
	I0910 14:10:05.484768    4685 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/false-322000/disk.qcow2 +20000M
	I0910 14:10:05.491812    4685 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:10:05.491825    4685 main.go:141] libmachine: STDERR: 
	I0910 14:10:05.491846    4685 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/false-322000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/false-322000/disk.qcow2
	I0910 14:10:05.491852    4685 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:10:05.491891    4685 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/false-322000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/false-322000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/false-322000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:eb:43:aa:32:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/false-322000/disk.qcow2
	I0910 14:10:05.493364    4685 main.go:141] libmachine: STDOUT: 
	I0910 14:10:05.493379    4685 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:10:05.493398    4685 client.go:171] LocalClient.Create took 184.82625ms
	I0910 14:10:07.495742    4685 start.go:128] duration metric: createHost completed in 2.208132584s
	I0910 14:10:07.495779    4685 start.go:83] releasing machines lock for "false-322000", held for 2.208236291s
	W0910 14:10:07.495851    4685 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:10:07.502168    4685 out.go:177] * Deleting "false-322000" in qemu2 ...
	W0910 14:10:07.523010    4685 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:10:07.523060    4685 start.go:687] Will try again in 5 seconds ...
	I0910 14:10:12.525264    4685 start.go:365] acquiring machines lock for false-322000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:10:12.525726    4685 start.go:369] acquired machines lock for "false-322000" in 374.459µs
	I0910 14:10:12.525857    4685 start.go:93] Provisioning new machine with config: &{Name:false-322000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.1 ClusterName:false-322000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:10:12.526141    4685 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:10:12.531848    4685 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0910 14:10:12.576777    4685 start.go:159] libmachine.API.Create for "false-322000" (driver="qemu2")
	I0910 14:10:12.576812    4685 client.go:168] LocalClient.Create starting
	I0910 14:10:12.576921    4685 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:10:12.576973    4685 main.go:141] libmachine: Decoding PEM data...
	I0910 14:10:12.576990    4685 main.go:141] libmachine: Parsing certificate...
	I0910 14:10:12.577060    4685 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:10:12.577094    4685 main.go:141] libmachine: Decoding PEM data...
	I0910 14:10:12.577110    4685 main.go:141] libmachine: Parsing certificate...
	I0910 14:10:12.577576    4685 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:10:12.704339    4685 main.go:141] libmachine: Creating SSH key...
	I0910 14:10:12.859620    4685 main.go:141] libmachine: Creating Disk image...
	I0910 14:10:12.859626    4685 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:10:12.859800    4685 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/false-322000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/false-322000/disk.qcow2
	I0910 14:10:12.868723    4685 main.go:141] libmachine: STDOUT: 
	I0910 14:10:12.868743    4685 main.go:141] libmachine: STDERR: 
	I0910 14:10:12.868821    4685 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/false-322000/disk.qcow2 +20000M
	I0910 14:10:12.875997    4685 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:10:12.876010    4685 main.go:141] libmachine: STDERR: 
	I0910 14:10:12.876034    4685 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/false-322000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/false-322000/disk.qcow2
	I0910 14:10:12.876040    4685 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:10:12.876084    4685 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/false-322000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/false-322000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/false-322000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:a5:80:ee:7c:82 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/false-322000/disk.qcow2
	I0910 14:10:12.877601    4685 main.go:141] libmachine: STDOUT: 
	I0910 14:10:12.877614    4685 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:10:12.877625    4685 client.go:171] LocalClient.Create took 300.802333ms
	I0910 14:10:14.879767    4685 start.go:128] duration metric: createHost completed in 2.353586625s
	I0910 14:10:14.879883    4685 start.go:83] releasing machines lock for "false-322000", held for 2.354090459s
	W0910 14:10:14.880265    4685 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-322000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-322000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:10:14.890774    4685 out.go:177] 
	W0910 14:10:14.894986    4685 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 14:10:14.895012    4685 out.go:239] * 
	* 
	W0910 14:10:14.897612    4685 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 14:10:14.906932    4685 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.75s)

TestNetworkPlugins/group/enable-default-cni/Start (9.7s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-322000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-322000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.698669083s)

-- stdout --
	* [enable-default-cni-322000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node enable-default-cni-322000 in cluster enable-default-cni-322000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-322000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 14:10:17.094726    4795 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:10:17.094839    4795 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:10:17.094842    4795 out.go:309] Setting ErrFile to fd 2...
	I0910 14:10:17.094845    4795 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:10:17.094957    4795 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:10:17.095944    4795 out.go:303] Setting JSON to false
	I0910 14:10:17.111058    4795 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2392,"bootTime":1694377825,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0910 14:10:17.111121    4795 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0910 14:10:17.116733    4795 out.go:177] * [enable-default-cni-322000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0910 14:10:17.124728    4795 out.go:177]   - MINIKUBE_LOCATION=17207
	I0910 14:10:17.127785    4795 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 14:10:17.124795    4795 notify.go:220] Checking for updates...
	I0910 14:10:17.133670    4795 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 14:10:17.136703    4795 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 14:10:17.139703    4795 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	I0910 14:10:17.142636    4795 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 14:10:17.145991    4795 config.go:182] Loaded profile config "multinode-362000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:10:17.146032    4795 driver.go:373] Setting default libvirt URI to qemu:///system
	I0910 14:10:17.150662    4795 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 14:10:17.157705    4795 start.go:298] selected driver: qemu2
	I0910 14:10:17.157711    4795 start.go:902] validating driver "qemu2" against <nil>
	I0910 14:10:17.157717    4795 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 14:10:17.159623    4795 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0910 14:10:17.163717    4795 out.go:177] * Automatically selected the socket_vmnet network
	E0910 14:10:17.166667    4795 start_flags.go:455] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0910 14:10:17.166677    4795 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 14:10:17.166695    4795 cni.go:84] Creating CNI manager for "bridge"
	I0910 14:10:17.166699    4795 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 14:10:17.166704    4795 start_flags.go:321] config:
	{Name:enable-default-cni-322000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:enable-default-cni-322000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 14:10:17.170593    4795 iso.go:125] acquiring lock: {Name:mk7b64d48bce9081530bc49bdfed6972c82152dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:10:17.177673    4795 out.go:177] * Starting control plane node enable-default-cni-322000 in cluster enable-default-cni-322000
	I0910 14:10:17.181640    4795 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0910 14:10:17.181678    4795 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0910 14:10:17.181690    4795 cache.go:57] Caching tarball of preloaded images
	I0910 14:10:17.181742    4795 preload.go:174] Found /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 14:10:17.181746    4795 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0910 14:10:17.181798    4795 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/enable-default-cni-322000/config.json ...
	I0910 14:10:17.181812    4795 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/enable-default-cni-322000/config.json: {Name:mk86449ef0f0e88bc0f35d7c56d558ae93d94da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 14:10:17.182021    4795 start.go:365] acquiring machines lock for enable-default-cni-322000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:10:17.182050    4795 start.go:369] acquired machines lock for "enable-default-cni-322000" in 22µs
	I0910 14:10:17.182060    4795 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-322000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:enable-default-cni-322000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:10:17.182088    4795 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:10:17.186695    4795 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0910 14:10:17.201815    4795 start.go:159] libmachine.API.Create for "enable-default-cni-322000" (driver="qemu2")
	I0910 14:10:17.201847    4795 client.go:168] LocalClient.Create starting
	I0910 14:10:17.201912    4795 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:10:17.201937    4795 main.go:141] libmachine: Decoding PEM data...
	I0910 14:10:17.201946    4795 main.go:141] libmachine: Parsing certificate...
	I0910 14:10:17.201990    4795 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:10:17.202013    4795 main.go:141] libmachine: Decoding PEM data...
	I0910 14:10:17.202020    4795 main.go:141] libmachine: Parsing certificate...
	I0910 14:10:17.202447    4795 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:10:17.319626    4795 main.go:141] libmachine: Creating SSH key...
	I0910 14:10:17.353833    4795 main.go:141] libmachine: Creating Disk image...
	I0910 14:10:17.353838    4795 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:10:17.353984    4795 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/enable-default-cni-322000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/enable-default-cni-322000/disk.qcow2
	I0910 14:10:17.362414    4795 main.go:141] libmachine: STDOUT: 
	I0910 14:10:17.362428    4795 main.go:141] libmachine: STDERR: 
	I0910 14:10:17.362478    4795 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/enable-default-cni-322000/disk.qcow2 +20000M
	I0910 14:10:17.369655    4795 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:10:17.369679    4795 main.go:141] libmachine: STDERR: 
	I0910 14:10:17.369705    4795 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/enable-default-cni-322000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/enable-default-cni-322000/disk.qcow2
	I0910 14:10:17.369716    4795 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:10:17.369753    4795 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/enable-default-cni-322000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/enable-default-cni-322000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/enable-default-cni-322000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:da:89:4a:42:4c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/enable-default-cni-322000/disk.qcow2
	I0910 14:10:17.371294    4795 main.go:141] libmachine: STDOUT: 
	I0910 14:10:17.371307    4795 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:10:17.371326    4795 client.go:171] LocalClient.Create took 169.473375ms
	I0910 14:10:19.373481    4795 start.go:128] duration metric: createHost completed in 2.191381958s
	I0910 14:10:19.373550    4795 start.go:83] releasing machines lock for "enable-default-cni-322000", held for 2.191495083s
	W0910 14:10:19.373869    4795 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:10:19.382209    4795 out.go:177] * Deleting "enable-default-cni-322000" in qemu2 ...
	W0910 14:10:19.404858    4795 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:10:19.404888    4795 start.go:687] Will try again in 5 seconds ...
	I0910 14:10:24.407091    4795 start.go:365] acquiring machines lock for enable-default-cni-322000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:10:24.407608    4795 start.go:369] acquired machines lock for "enable-default-cni-322000" in 383.334µs
	I0910 14:10:24.407722    4795 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-322000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:enable-default-cni-322000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:10:24.408057    4795 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:10:24.412833    4795 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0910 14:10:24.457598    4795 start.go:159] libmachine.API.Create for "enable-default-cni-322000" (driver="qemu2")
	I0910 14:10:24.457651    4795 client.go:168] LocalClient.Create starting
	I0910 14:10:24.457755    4795 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:10:24.457803    4795 main.go:141] libmachine: Decoding PEM data...
	I0910 14:10:24.457817    4795 main.go:141] libmachine: Parsing certificate...
	I0910 14:10:24.457886    4795 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:10:24.457921    4795 main.go:141] libmachine: Decoding PEM data...
	I0910 14:10:24.457935    4795 main.go:141] libmachine: Parsing certificate...
	I0910 14:10:24.458426    4795 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:10:24.589280    4795 main.go:141] libmachine: Creating SSH key...
	I0910 14:10:24.705711    4795 main.go:141] libmachine: Creating Disk image...
	I0910 14:10:24.705716    4795 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:10:24.705850    4795 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/enable-default-cni-322000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/enable-default-cni-322000/disk.qcow2
	I0910 14:10:24.714272    4795 main.go:141] libmachine: STDOUT: 
	I0910 14:10:24.714289    4795 main.go:141] libmachine: STDERR: 
	I0910 14:10:24.714389    4795 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/enable-default-cni-322000/disk.qcow2 +20000M
	I0910 14:10:24.721558    4795 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:10:24.721570    4795 main.go:141] libmachine: STDERR: 
	I0910 14:10:24.721591    4795 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/enable-default-cni-322000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/enable-default-cni-322000/disk.qcow2
	I0910 14:10:24.721599    4795 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:10:24.721634    4795 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/enable-default-cni-322000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/enable-default-cni-322000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/enable-default-cni-322000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:35:46:12:d3:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/enable-default-cni-322000/disk.qcow2
	I0910 14:10:24.723115    4795 main.go:141] libmachine: STDOUT: 
	I0910 14:10:24.723129    4795 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:10:24.723149    4795 client.go:171] LocalClient.Create took 265.494625ms
	I0910 14:10:26.725340    4795 start.go:128] duration metric: createHost completed in 2.317229417s
	I0910 14:10:26.725440    4795 start.go:83] releasing machines lock for "enable-default-cni-322000", held for 2.317811708s
	W0910 14:10:26.726198    4795 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-322000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-322000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:10:26.736656    4795 out.go:177] 
	W0910 14:10:26.740843    4795 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 14:10:26.740879    4795 out.go:239] * 
	* 
	W0910 14:10:26.743543    4795 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 14:10:26.752811    4795 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.70s)

TestNetworkPlugins/group/flannel/Start (9.83s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-322000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-322000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.829863041s)

-- stdout --
	* [flannel-322000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node flannel-322000 in cluster flannel-322000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-322000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 14:10:28.945560    4905 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:10:28.945746    4905 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:10:28.945752    4905 out.go:309] Setting ErrFile to fd 2...
	I0910 14:10:28.945755    4905 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:10:28.945880    4905 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:10:28.946878    4905 out.go:303] Setting JSON to false
	I0910 14:10:28.962098    4905 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2403,"bootTime":1694377825,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0910 14:10:28.962168    4905 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0910 14:10:28.966974    4905 out.go:177] * [flannel-322000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0910 14:10:28.970830    4905 out.go:177]   - MINIKUBE_LOCATION=17207
	I0910 14:10:28.973884    4905 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 14:10:28.970900    4905 notify.go:220] Checking for updates...
	I0910 14:10:28.980876    4905 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 14:10:28.983872    4905 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 14:10:28.986880    4905 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	I0910 14:10:28.989877    4905 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 14:10:28.993379    4905 config.go:182] Loaded profile config "multinode-362000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:10:28.993444    4905 driver.go:373] Setting default libvirt URI to qemu:///system
	I0910 14:10:28.996902    4905 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 14:10:29.003827    4905 start.go:298] selected driver: qemu2
	I0910 14:10:29.003832    4905 start.go:902] validating driver "qemu2" against <nil>
	I0910 14:10:29.003838    4905 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 14:10:29.005718    4905 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0910 14:10:29.008917    4905 out.go:177] * Automatically selected the socket_vmnet network
	I0910 14:10:29.012844    4905 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 14:10:29.012863    4905 cni.go:84] Creating CNI manager for "flannel"
	I0910 14:10:29.012867    4905 start_flags.go:316] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0910 14:10:29.012874    4905 start_flags.go:321] config:
	{Name:flannel-322000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:flannel-322000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 14:10:29.016905    4905 iso.go:125] acquiring lock: {Name:mk7b64d48bce9081530bc49bdfed6972c82152dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:10:29.024805    4905 out.go:177] * Starting control plane node flannel-322000 in cluster flannel-322000
	I0910 14:10:29.028840    4905 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0910 14:10:29.028860    4905 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0910 14:10:29.028876    4905 cache.go:57] Caching tarball of preloaded images
	I0910 14:10:29.028935    4905 preload.go:174] Found /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 14:10:29.028941    4905 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0910 14:10:29.029020    4905 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/flannel-322000/config.json ...
	I0910 14:10:29.029035    4905 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/flannel-322000/config.json: {Name:mkf9749ba2335ac0f229dd248cffcb74af8218ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 14:10:29.029236    4905 start.go:365] acquiring machines lock for flannel-322000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:10:29.029265    4905 start.go:369] acquired machines lock for "flannel-322000" in 23.542µs
	I0910 14:10:29.029276    4905 start.go:93] Provisioning new machine with config: &{Name:flannel-322000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:flannel-322000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:10:29.029307    4905 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:10:29.037898    4905 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0910 14:10:29.053086    4905 start.go:159] libmachine.API.Create for "flannel-322000" (driver="qemu2")
	I0910 14:10:29.053111    4905 client.go:168] LocalClient.Create starting
	I0910 14:10:29.053166    4905 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:10:29.053190    4905 main.go:141] libmachine: Decoding PEM data...
	I0910 14:10:29.053203    4905 main.go:141] libmachine: Parsing certificate...
	I0910 14:10:29.053239    4905 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:10:29.053256    4905 main.go:141] libmachine: Decoding PEM data...
	I0910 14:10:29.053265    4905 main.go:141] libmachine: Parsing certificate...
	I0910 14:10:29.053595    4905 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:10:29.169334    4905 main.go:141] libmachine: Creating SSH key...
	I0910 14:10:29.301843    4905 main.go:141] libmachine: Creating Disk image...
	I0910 14:10:29.301851    4905 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:10:29.301996    4905 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/flannel-322000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/flannel-322000/disk.qcow2
	I0910 14:10:29.310763    4905 main.go:141] libmachine: STDOUT: 
	I0910 14:10:29.310776    4905 main.go:141] libmachine: STDERR: 
	I0910 14:10:29.310821    4905 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/flannel-322000/disk.qcow2 +20000M
	I0910 14:10:29.318004    4905 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:10:29.318016    4905 main.go:141] libmachine: STDERR: 
	I0910 14:10:29.318029    4905 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/flannel-322000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/flannel-322000/disk.qcow2
	I0910 14:10:29.318035    4905 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:10:29.318070    4905 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/flannel-322000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/flannel-322000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/flannel-322000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:a7:a4:d5:ba:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/flannel-322000/disk.qcow2
	I0910 14:10:29.319606    4905 main.go:141] libmachine: STDOUT: 
	I0910 14:10:29.319629    4905 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:10:29.319652    4905 client.go:171] LocalClient.Create took 266.535334ms
	I0910 14:10:31.321798    4905 start.go:128] duration metric: createHost completed in 2.292475083s
	I0910 14:10:31.321907    4905 start.go:83] releasing machines lock for "flannel-322000", held for 2.292602208s
	W0910 14:10:31.321959    4905 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:10:31.329360    4905 out.go:177] * Deleting "flannel-322000" in qemu2 ...
	W0910 14:10:31.351064    4905 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:10:31.351090    4905 start.go:687] Will try again in 5 seconds ...
	I0910 14:10:36.353377    4905 start.go:365] acquiring machines lock for flannel-322000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:10:36.353863    4905 start.go:369] acquired machines lock for "flannel-322000" in 400.5µs
	I0910 14:10:36.354024    4905 start.go:93] Provisioning new machine with config: &{Name:flannel-322000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:flannel-322000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:10:36.354332    4905 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:10:36.363071    4905 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0910 14:10:36.412644    4905 start.go:159] libmachine.API.Create for "flannel-322000" (driver="qemu2")
	I0910 14:10:36.412687    4905 client.go:168] LocalClient.Create starting
	I0910 14:10:36.412832    4905 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:10:36.412892    4905 main.go:141] libmachine: Decoding PEM data...
	I0910 14:10:36.412911    4905 main.go:141] libmachine: Parsing certificate...
	I0910 14:10:36.412977    4905 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:10:36.413014    4905 main.go:141] libmachine: Decoding PEM data...
	I0910 14:10:36.413028    4905 main.go:141] libmachine: Parsing certificate...
	I0910 14:10:36.413532    4905 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:10:36.540898    4905 main.go:141] libmachine: Creating SSH key...
	I0910 14:10:36.685674    4905 main.go:141] libmachine: Creating Disk image...
	I0910 14:10:36.685680    4905 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:10:36.685846    4905 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/flannel-322000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/flannel-322000/disk.qcow2
	I0910 14:10:36.694701    4905 main.go:141] libmachine: STDOUT: 
	I0910 14:10:36.694714    4905 main.go:141] libmachine: STDERR: 
	I0910 14:10:36.694779    4905 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/flannel-322000/disk.qcow2 +20000M
	I0910 14:10:36.701886    4905 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:10:36.701897    4905 main.go:141] libmachine: STDERR: 
	I0910 14:10:36.701909    4905 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/flannel-322000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/flannel-322000/disk.qcow2
	I0910 14:10:36.701914    4905 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:10:36.701961    4905 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/flannel-322000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/flannel-322000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/flannel-322000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:ac:dd:24:ae:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/flannel-322000/disk.qcow2
	I0910 14:10:36.703395    4905 main.go:141] libmachine: STDOUT: 
	I0910 14:10:36.703408    4905 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:10:36.703419    4905 client.go:171] LocalClient.Create took 290.727958ms
	I0910 14:10:38.705573    4905 start.go:128] duration metric: createHost completed in 2.351219792s
	I0910 14:10:38.705646    4905 start.go:83] releasing machines lock for "flannel-322000", held for 2.351767791s
	W0910 14:10:38.706012    4905 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-322000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-322000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:10:38.716583    4905 out.go:177] 
	W0910 14:10:38.720624    4905 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 14:10:38.720646    4905 out.go:239] * 
	* 
	W0910 14:10:38.723522    4905 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 14:10:38.733578    4905 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.83s)
TestNetworkPlugins/group/bridge/Start (9.66s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-322000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-322000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.662735542s)
-- stdout --
	* [bridge-322000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node bridge-322000 in cluster bridge-322000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-322000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0910 14:10:41.149586    5023 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:10:41.149694    5023 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:10:41.149697    5023 out.go:309] Setting ErrFile to fd 2...
	I0910 14:10:41.149699    5023 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:10:41.149830    5023 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:10:41.150935    5023 out.go:303] Setting JSON to false
	I0910 14:10:41.166321    5023 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2416,"bootTime":1694377825,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0910 14:10:41.166378    5023 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0910 14:10:41.171998    5023 out.go:177] * [bridge-322000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0910 14:10:41.179916    5023 out.go:177]   - MINIKUBE_LOCATION=17207
	I0910 14:10:41.179969    5023 notify.go:220] Checking for updates...
	I0910 14:10:41.186919    5023 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 14:10:41.190012    5023 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 14:10:41.193023    5023 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 14:10:41.194449    5023 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	I0910 14:10:41.197979    5023 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 14:10:41.201330    5023 config.go:182] Loaded profile config "multinode-362000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:10:41.201375    5023 driver.go:373] Setting default libvirt URI to qemu:///system
	I0910 14:10:41.204840    5023 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 14:10:41.211973    5023 start.go:298] selected driver: qemu2
	I0910 14:10:41.211981    5023 start.go:902] validating driver "qemu2" against <nil>
	I0910 14:10:41.211988    5023 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 14:10:41.213947    5023 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0910 14:10:41.217836    5023 out.go:177] * Automatically selected the socket_vmnet network
	I0910 14:10:41.221061    5023 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 14:10:41.221101    5023 cni.go:84] Creating CNI manager for "bridge"
	I0910 14:10:41.221109    5023 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 14:10:41.221116    5023 start_flags.go:321] config:
	{Name:bridge-322000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:bridge-322000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHA
gentPID:0 AutoPauseInterval:1m0s}
	I0910 14:10:41.225096    5023 iso.go:125] acquiring lock: {Name:mk7b64d48bce9081530bc49bdfed6972c82152dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:10:41.229054    5023 out.go:177] * Starting control plane node bridge-322000 in cluster bridge-322000
	I0910 14:10:41.236980    5023 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0910 14:10:41.237006    5023 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0910 14:10:41.237019    5023 cache.go:57] Caching tarball of preloaded images
	I0910 14:10:41.237069    5023 preload.go:174] Found /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 14:10:41.237074    5023 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0910 14:10:41.237132    5023 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/bridge-322000/config.json ...
	I0910 14:10:41.237143    5023 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/bridge-322000/config.json: {Name:mk050f1a862f6c67ffa41aa2598796dd1a52ddbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 14:10:41.237355    5023 start.go:365] acquiring machines lock for bridge-322000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:10:41.237383    5023 start.go:369] acquired machines lock for "bridge-322000" in 23.375µs
	I0910 14:10:41.237395    5023 start.go:93] Provisioning new machine with config: &{Name:bridge-322000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.1 ClusterName:bridge-322000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:10:41.237418    5023 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:10:41.245932    5023 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0910 14:10:41.261022    5023 start.go:159] libmachine.API.Create for "bridge-322000" (driver="qemu2")
	I0910 14:10:41.261042    5023 client.go:168] LocalClient.Create starting
	I0910 14:10:41.261100    5023 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:10:41.261130    5023 main.go:141] libmachine: Decoding PEM data...
	I0910 14:10:41.261143    5023 main.go:141] libmachine: Parsing certificate...
	I0910 14:10:41.261186    5023 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:10:41.261205    5023 main.go:141] libmachine: Decoding PEM data...
	I0910 14:10:41.261220    5023 main.go:141] libmachine: Parsing certificate...
	I0910 14:10:41.261826    5023 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:10:41.379087    5023 main.go:141] libmachine: Creating SSH key...
	I0910 14:10:41.431367    5023 main.go:141] libmachine: Creating Disk image...
	I0910 14:10:41.431372    5023 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:10:41.431511    5023 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/bridge-322000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/bridge-322000/disk.qcow2
	I0910 14:10:41.439906    5023 main.go:141] libmachine: STDOUT: 
	I0910 14:10:41.439924    5023 main.go:141] libmachine: STDERR: 
	I0910 14:10:41.439978    5023 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/bridge-322000/disk.qcow2 +20000M
	I0910 14:10:41.447097    5023 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:10:41.447119    5023 main.go:141] libmachine: STDERR: 
	I0910 14:10:41.447147    5023 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/bridge-322000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/bridge-322000/disk.qcow2
	I0910 14:10:41.447153    5023 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:10:41.447196    5023 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/bridge-322000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/bridge-322000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/bridge-322000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:dd:f0:14:ab:c2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/bridge-322000/disk.qcow2
	I0910 14:10:41.448723    5023 main.go:141] libmachine: STDOUT: 
	I0910 14:10:41.448734    5023 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:10:41.448756    5023 client.go:171] LocalClient.Create took 187.708333ms
	I0910 14:10:43.450922    5023 start.go:128] duration metric: createHost completed in 2.213492667s
	I0910 14:10:43.451038    5023 start.go:83] releasing machines lock for "bridge-322000", held for 2.213592458s
	W0910 14:10:43.451098    5023 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:10:43.459666    5023 out.go:177] * Deleting "bridge-322000" in qemu2 ...
	W0910 14:10:43.483430    5023 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:10:43.483459    5023 start.go:687] Will try again in 5 seconds ...
	I0910 14:10:48.485648    5023 start.go:365] acquiring machines lock for bridge-322000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:10:48.485966    5023 start.go:369] acquired machines lock for "bridge-322000" in 215.791µs
	I0910 14:10:48.486048    5023 start.go:93] Provisioning new machine with config: &{Name:bridge-322000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.1 ClusterName:bridge-322000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:10:48.486422    5023 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:10:48.495505    5023 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0910 14:10:48.543017    5023 start.go:159] libmachine.API.Create for "bridge-322000" (driver="qemu2")
	I0910 14:10:48.543051    5023 client.go:168] LocalClient.Create starting
	I0910 14:10:48.543163    5023 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:10:48.543227    5023 main.go:141] libmachine: Decoding PEM data...
	I0910 14:10:48.543248    5023 main.go:141] libmachine: Parsing certificate...
	I0910 14:10:48.543317    5023 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:10:48.543353    5023 main.go:141] libmachine: Decoding PEM data...
	I0910 14:10:48.543378    5023 main.go:141] libmachine: Parsing certificate...
	I0910 14:10:48.543870    5023 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:10:48.677015    5023 main.go:141] libmachine: Creating SSH key...
	I0910 14:10:48.725057    5023 main.go:141] libmachine: Creating Disk image...
	I0910 14:10:48.725062    5023 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:10:48.725191    5023 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/bridge-322000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/bridge-322000/disk.qcow2
	I0910 14:10:48.733506    5023 main.go:141] libmachine: STDOUT: 
	I0910 14:10:48.733522    5023 main.go:141] libmachine: STDERR: 
	I0910 14:10:48.733568    5023 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/bridge-322000/disk.qcow2 +20000M
	I0910 14:10:48.740657    5023 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:10:48.740670    5023 main.go:141] libmachine: STDERR: 
	I0910 14:10:48.740685    5023 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/bridge-322000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/bridge-322000/disk.qcow2
	I0910 14:10:48.740691    5023 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:10:48.740733    5023 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/bridge-322000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/bridge-322000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/bridge-322000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:c6:fe:42:94:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/bridge-322000/disk.qcow2
	I0910 14:10:48.742187    5023 main.go:141] libmachine: STDOUT: 
	I0910 14:10:48.742203    5023 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:10:48.742215    5023 client.go:171] LocalClient.Create took 199.160083ms
	I0910 14:10:50.744449    5023 start.go:128] duration metric: createHost completed in 2.257964958s
	I0910 14:10:50.744527    5023 start.go:83] releasing machines lock for "bridge-322000", held for 2.258551625s
	W0910 14:10:50.745004    5023 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-322000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-322000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:10:50.755684    5023 out.go:177] 
	W0910 14:10:50.758770    5023 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 14:10:50.758840    5023 out.go:239] * 
	* 
	W0910 14:10:50.761535    5023 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 14:10:50.770676    5023 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.66s)

TestNetworkPlugins/group/kubenet/Start (9.75s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-322000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-322000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.745223s)

-- stdout --
	* [kubenet-322000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubenet-322000 in cluster kubenet-322000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-322000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 14:10:52.952804    5136 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:10:52.952918    5136 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:10:52.952921    5136 out.go:309] Setting ErrFile to fd 2...
	I0910 14:10:52.952924    5136 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:10:52.953041    5136 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:10:52.954005    5136 out.go:303] Setting JSON to false
	I0910 14:10:52.969348    5136 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2427,"bootTime":1694377825,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0910 14:10:52.969409    5136 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0910 14:10:52.973414    5136 out.go:177] * [kubenet-322000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0910 14:10:52.981397    5136 out.go:177]   - MINIKUBE_LOCATION=17207
	I0910 14:10:52.981494    5136 notify.go:220] Checking for updates...
	I0910 14:10:52.985348    5136 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 14:10:52.989333    5136 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 14:10:52.992420    5136 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 14:10:52.995362    5136 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	I0910 14:10:52.998397    5136 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 14:10:53.001618    5136 config.go:182] Loaded profile config "multinode-362000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:10:53.001659    5136 driver.go:373] Setting default libvirt URI to qemu:///system
	I0910 14:10:53.006392    5136 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 14:10:53.013257    5136 start.go:298] selected driver: qemu2
	I0910 14:10:53.013269    5136 start.go:902] validating driver "qemu2" against <nil>
	I0910 14:10:53.013276    5136 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 14:10:53.015205    5136 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0910 14:10:53.018375    5136 out.go:177] * Automatically selected the socket_vmnet network
	I0910 14:10:53.021430    5136 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 14:10:53.021448    5136 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0910 14:10:53.021452    5136 start_flags.go:321] config:
	{Name:kubenet-322000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kubenet-322000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHA
gentPID:0 AutoPauseInterval:1m0s}
	I0910 14:10:53.025519    5136 iso.go:125] acquiring lock: {Name:mk7b64d48bce9081530bc49bdfed6972c82152dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:10:53.032390    5136 out.go:177] * Starting control plane node kubenet-322000 in cluster kubenet-322000
	I0910 14:10:53.036270    5136 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0910 14:10:53.036295    5136 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0910 14:10:53.036316    5136 cache.go:57] Caching tarball of preloaded images
	I0910 14:10:53.036376    5136 preload.go:174] Found /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 14:10:53.036382    5136 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0910 14:10:53.036462    5136 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/kubenet-322000/config.json ...
	I0910 14:10:53.036473    5136 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/kubenet-322000/config.json: {Name:mk9a55de5bf2dd7a6dda470f77b3133571f383d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 14:10:53.036691    5136 start.go:365] acquiring machines lock for kubenet-322000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:10:53.036720    5136 start.go:369] acquired machines lock for "kubenet-322000" in 23.667µs
	I0910 14:10:53.036732    5136 start.go:93] Provisioning new machine with config: &{Name:kubenet-322000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:kubenet-322000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:10:53.036760    5136 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:10:53.045352    5136 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0910 14:10:53.061108    5136 start.go:159] libmachine.API.Create for "kubenet-322000" (driver="qemu2")
	I0910 14:10:53.061127    5136 client.go:168] LocalClient.Create starting
	I0910 14:10:53.061176    5136 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:10:53.061204    5136 main.go:141] libmachine: Decoding PEM data...
	I0910 14:10:53.061213    5136 main.go:141] libmachine: Parsing certificate...
	I0910 14:10:53.061256    5136 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:10:53.061273    5136 main.go:141] libmachine: Decoding PEM data...
	I0910 14:10:53.061280    5136 main.go:141] libmachine: Parsing certificate...
	I0910 14:10:53.061588    5136 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:10:53.186417    5136 main.go:141] libmachine: Creating SSH key...
	I0910 14:10:53.320508    5136 main.go:141] libmachine: Creating Disk image...
	I0910 14:10:53.320515    5136 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:10:53.320658    5136 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubenet-322000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubenet-322000/disk.qcow2
	I0910 14:10:53.329519    5136 main.go:141] libmachine: STDOUT: 
	I0910 14:10:53.329532    5136 main.go:141] libmachine: STDERR: 
	I0910 14:10:53.329577    5136 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubenet-322000/disk.qcow2 +20000M
	I0910 14:10:53.336665    5136 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:10:53.336677    5136 main.go:141] libmachine: STDERR: 
	I0910 14:10:53.336691    5136 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubenet-322000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubenet-322000/disk.qcow2
	I0910 14:10:53.336699    5136 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:10:53.336743    5136 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubenet-322000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubenet-322000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubenet-322000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:1a:e6:58:51:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubenet-322000/disk.qcow2
	I0910 14:10:53.338243    5136 main.go:141] libmachine: STDOUT: 
	I0910 14:10:53.338254    5136 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:10:53.338275    5136 client.go:171] LocalClient.Create took 277.141125ms
	I0910 14:10:55.340445    5136 start.go:128] duration metric: createHost completed in 2.303674625s
	I0910 14:10:55.340539    5136 start.go:83] releasing machines lock for "kubenet-322000", held for 2.303775583s
	W0910 14:10:55.340600    5136 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:10:55.351848    5136 out.go:177] * Deleting "kubenet-322000" in qemu2 ...
	W0910 14:10:55.371497    5136 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:10:55.371532    5136 start.go:687] Will try again in 5 seconds ...
	I0910 14:11:00.373832    5136 start.go:365] acquiring machines lock for kubenet-322000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:11:00.374410    5136 start.go:369] acquired machines lock for "kubenet-322000" in 454.417µs
	I0910 14:11:00.374541    5136 start.go:93] Provisioning new machine with config: &{Name:kubenet-322000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:kubenet-322000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:11:00.374841    5136 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:11:00.384368    5136 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0910 14:11:00.433862    5136 start.go:159] libmachine.API.Create for "kubenet-322000" (driver="qemu2")
	I0910 14:11:00.433906    5136 client.go:168] LocalClient.Create starting
	I0910 14:11:00.434085    5136 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:11:00.434139    5136 main.go:141] libmachine: Decoding PEM data...
	I0910 14:11:00.434159    5136 main.go:141] libmachine: Parsing certificate...
	I0910 14:11:00.434231    5136 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:11:00.434274    5136 main.go:141] libmachine: Decoding PEM data...
	I0910 14:11:00.434293    5136 main.go:141] libmachine: Parsing certificate...
	I0910 14:11:00.434795    5136 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:11:00.563702    5136 main.go:141] libmachine: Creating SSH key...
	I0910 14:11:00.612633    5136 main.go:141] libmachine: Creating Disk image...
	I0910 14:11:00.612638    5136 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:11:00.612784    5136 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubenet-322000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubenet-322000/disk.qcow2
	I0910 14:11:00.621313    5136 main.go:141] libmachine: STDOUT: 
	I0910 14:11:00.621326    5136 main.go:141] libmachine: STDERR: 
	I0910 14:11:00.621373    5136 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubenet-322000/disk.qcow2 +20000M
	I0910 14:11:00.628464    5136 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:11:00.628478    5136 main.go:141] libmachine: STDERR: 
	I0910 14:11:00.628493    5136 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubenet-322000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubenet-322000/disk.qcow2
	I0910 14:11:00.628499    5136 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:11:00.628539    5136 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubenet-322000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubenet-322000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubenet-322000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:e7:b1:80:43:ec -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/kubenet-322000/disk.qcow2
	I0910 14:11:00.630048    5136 main.go:141] libmachine: STDOUT: 
	I0910 14:11:00.630064    5136 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:11:00.630076    5136 client.go:171] LocalClient.Create took 196.163625ms
	I0910 14:11:02.632236    5136 start.go:128] duration metric: createHost completed in 2.257376583s
	I0910 14:11:02.632302    5136 start.go:83] releasing machines lock for "kubenet-322000", held for 2.257874s
	W0910 14:11:02.632729    5136 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-322000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-322000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:11:02.642447    5136 out.go:177] 
	W0910 14:11:02.647382    5136 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 14:11:02.647432    5136 out.go:239] * 
	* 
	W0910 14:11:02.649898    5136 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 14:11:02.657383    5136 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.75s)

TestStoppedBinaryUpgrade/Upgrade (2.25s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3337612707.exe start -p stopped-upgrade-758000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3337612707.exe start -p stopped-upgrade-758000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3337612707.exe: permission denied (1.829958ms)
version_upgrade_test.go:196: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3337612707.exe start -p stopped-upgrade-758000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3337612707.exe start -p stopped-upgrade-758000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3337612707.exe: permission denied (2.010459ms)
version_upgrade_test.go:196: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3337612707.exe start -p stopped-upgrade-758000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3337612707.exe start -p stopped-upgrade-758000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3337612707.exe: permission denied (1.91275ms)
version_upgrade_test.go:202: legacy v1.6.2 start failed: fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.3337612707.exe: permission denied
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (2.25s)
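The repeated `fork/exec ...: permission denied` above typically means the legacy minikube binary downloaded into `$TMPDIR` is missing its execute bit, so the OS refuses to exec it regardless of its contents. A minimal sketch of the symptom and the likely fix, using a throwaway script rather than the real cached binary (the actual path under `/var/folders/...` is environment-specific):

```shell
#!/bin/sh
# Reproduce "fork/exec: permission denied" with a throwaway script,
# then clear it by restoring the execute bit.
tmpbin="$(mktemp)"
printf '#!/bin/sh\necho ok\n' > "$tmpbin"

# Without +x the exec fails, just like the v1.6.2 binary in the log.
"$tmpbin" 2>/dev/null && echo "unexpectedly ran" || echo "permission denied, as in the log"

chmod +x "$tmpbin"   # restore the execute bit
"$tmpbin"            # runs normally now

rm -f "$tmpbin"
```

If this is the cause, the `chmod +x` belongs in whatever helper downloads the legacy release before `version_upgrade_test.go` tries to run it; that helper is not shown in this excerpt, so this is a diagnosis sketch, not a confirmed fix.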

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (9.98s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-409000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-409000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (9.929823083s)

                                                
                                                
-- stdout --
	* [old-k8s-version-409000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node old-k8s-version-409000 in cluster old-k8s-version-409000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-409000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 14:11:04.838473    5250 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:11:04.838593    5250 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:11:04.838596    5250 out.go:309] Setting ErrFile to fd 2...
	I0910 14:11:04.838599    5250 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:11:04.838713    5250 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:11:04.839739    5250 out.go:303] Setting JSON to false
	I0910 14:11:04.854862    5250 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2439,"bootTime":1694377825,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0910 14:11:04.854933    5250 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0910 14:11:04.858746    5250 out.go:177] * [old-k8s-version-409000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0910 14:11:04.866772    5250 out.go:177]   - MINIKUBE_LOCATION=17207
	I0910 14:11:04.870644    5250 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 14:11:04.866821    5250 notify.go:220] Checking for updates...
	I0910 14:11:04.873719    5250 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 14:11:04.876754    5250 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 14:11:04.879657    5250 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	I0910 14:11:04.882718    5250 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 14:11:04.886088    5250 config.go:182] Loaded profile config "multinode-362000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:11:04.886135    5250 driver.go:373] Setting default libvirt URI to qemu:///system
	I0910 14:11:04.890752    5250 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 14:11:04.897761    5250 start.go:298] selected driver: qemu2
	I0910 14:11:04.897771    5250 start.go:902] validating driver "qemu2" against <nil>
	I0910 14:11:04.897777    5250 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 14:11:04.899703    5250 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0910 14:11:04.902706    5250 out.go:177] * Automatically selected the socket_vmnet network
	I0910 14:11:04.905857    5250 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 14:11:04.905884    5250 cni.go:84] Creating CNI manager for ""
	I0910 14:11:04.905890    5250 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0910 14:11:04.905894    5250 start_flags.go:321] config:
	{Name:old-k8s-version-409000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-409000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSoc
k: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 14:11:04.910077    5250 iso.go:125] acquiring lock: {Name:mk7b64d48bce9081530bc49bdfed6972c82152dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:11:04.917608    5250 out.go:177] * Starting control plane node old-k8s-version-409000 in cluster old-k8s-version-409000
	I0910 14:11:04.921754    5250 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0910 14:11:04.921785    5250 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0910 14:11:04.921801    5250 cache.go:57] Caching tarball of preloaded images
	I0910 14:11:04.921882    5250 preload.go:174] Found /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 14:11:04.921888    5250 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0910 14:11:04.921953    5250 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/old-k8s-version-409000/config.json ...
	I0910 14:11:04.921966    5250 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/old-k8s-version-409000/config.json: {Name:mk9f09e1cee5e745fd175064cae805fe3cf22b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 14:11:04.922169    5250 start.go:365] acquiring machines lock for old-k8s-version-409000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:11:04.922202    5250 start.go:369] acquired machines lock for "old-k8s-version-409000" in 23.584µs
	I0910 14:11:04.922213    5250 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-409000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-409000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:11:04.922247    5250 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:11:04.930673    5250 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 14:11:04.946801    5250 start.go:159] libmachine.API.Create for "old-k8s-version-409000" (driver="qemu2")
	I0910 14:11:04.946826    5250 client.go:168] LocalClient.Create starting
	I0910 14:11:04.946888    5250 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:11:04.946916    5250 main.go:141] libmachine: Decoding PEM data...
	I0910 14:11:04.946931    5250 main.go:141] libmachine: Parsing certificate...
	I0910 14:11:04.946973    5250 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:11:04.946990    5250 main.go:141] libmachine: Decoding PEM data...
	I0910 14:11:04.946997    5250 main.go:141] libmachine: Parsing certificate...
	I0910 14:11:04.947322    5250 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:11:05.063265    5250 main.go:141] libmachine: Creating SSH key...
	I0910 14:11:05.314365    5250 main.go:141] libmachine: Creating Disk image...
	I0910 14:11:05.314379    5250 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:11:05.314540    5250 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/old-k8s-version-409000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/old-k8s-version-409000/disk.qcow2
	I0910 14:11:05.323664    5250 main.go:141] libmachine: STDOUT: 
	I0910 14:11:05.323681    5250 main.go:141] libmachine: STDERR: 
	I0910 14:11:05.323748    5250 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/old-k8s-version-409000/disk.qcow2 +20000M
	I0910 14:11:05.332130    5250 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:11:05.332146    5250 main.go:141] libmachine: STDERR: 
	I0910 14:11:05.332169    5250 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/old-k8s-version-409000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/old-k8s-version-409000/disk.qcow2
	I0910 14:11:05.332176    5250 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:11:05.332216    5250 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/old-k8s-version-409000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/old-k8s-version-409000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/old-k8s-version-409000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:43:9b:45:60:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/old-k8s-version-409000/disk.qcow2
	I0910 14:11:05.333974    5250 main.go:141] libmachine: STDOUT: 
	I0910 14:11:05.333986    5250 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:11:05.334007    5250 client.go:171] LocalClient.Create took 387.174125ms
	I0910 14:11:07.336173    5250 start.go:128] duration metric: createHost completed in 2.413910417s
	I0910 14:11:07.336254    5250 start.go:83] releasing machines lock for "old-k8s-version-409000", held for 2.41404725s
	W0910 14:11:07.336379    5250 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:11:07.355928    5250 out.go:177] * Deleting "old-k8s-version-409000" in qemu2 ...
	W0910 14:11:07.372635    5250 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:11:07.372722    5250 start.go:687] Will try again in 5 seconds ...
	I0910 14:11:12.374419    5250 start.go:365] acquiring machines lock for old-k8s-version-409000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:11:12.374925    5250 start.go:369] acquired machines lock for "old-k8s-version-409000" in 418.417µs
	I0910 14:11:12.375091    5250 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-409000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-409000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:11:12.375447    5250 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:11:12.385036    5250 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 14:11:12.432090    5250 start.go:159] libmachine.API.Create for "old-k8s-version-409000" (driver="qemu2")
	I0910 14:11:12.432137    5250 client.go:168] LocalClient.Create starting
	I0910 14:11:12.432265    5250 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:11:12.432341    5250 main.go:141] libmachine: Decoding PEM data...
	I0910 14:11:12.432363    5250 main.go:141] libmachine: Parsing certificate...
	I0910 14:11:12.432443    5250 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:11:12.432499    5250 main.go:141] libmachine: Decoding PEM data...
	I0910 14:11:12.432516    5250 main.go:141] libmachine: Parsing certificate...
	I0910 14:11:12.433054    5250 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:11:12.561690    5250 main.go:141] libmachine: Creating SSH key...
	I0910 14:11:12.674400    5250 main.go:141] libmachine: Creating Disk image...
	I0910 14:11:12.674409    5250 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:11:12.674543    5250 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/old-k8s-version-409000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/old-k8s-version-409000/disk.qcow2
	I0910 14:11:12.683149    5250 main.go:141] libmachine: STDOUT: 
	I0910 14:11:12.683166    5250 main.go:141] libmachine: STDERR: 
	I0910 14:11:12.683232    5250 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/old-k8s-version-409000/disk.qcow2 +20000M
	I0910 14:11:12.690529    5250 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:11:12.690542    5250 main.go:141] libmachine: STDERR: 
	I0910 14:11:12.690558    5250 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/old-k8s-version-409000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/old-k8s-version-409000/disk.qcow2
	I0910 14:11:12.690564    5250 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:11:12.690610    5250 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/old-k8s-version-409000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/old-k8s-version-409000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/old-k8s-version-409000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:fe:72:0d:76:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/old-k8s-version-409000/disk.qcow2
	I0910 14:11:12.692172    5250 main.go:141] libmachine: STDOUT: 
	I0910 14:11:12.692189    5250 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:11:12.692202    5250 client.go:171] LocalClient.Create took 260.054209ms
	I0910 14:11:14.694350    5250 start.go:128] duration metric: createHost completed in 2.318885541s
	I0910 14:11:14.694418    5250 start.go:83] releasing machines lock for "old-k8s-version-409000", held for 2.319472208s
	W0910 14:11:14.694810    5250 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-409000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-409000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:11:14.714478    5250 out.go:177] 
	W0910 14:11:14.719513    5250 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 14:11:14.719545    5250 out.go:239] * 
	* 
	W0910 14:11:14.721526    5250 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 14:11:14.731422    5250 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-409000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-409000 -n old-k8s-version-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-409000 -n old-k8s-version-409000: exit status 7 (49.423416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-409000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.98s)
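Every qemu2 start in this run dies the same way: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, on both creation attempts. That points at the `socket_vmnet` daemon not running on the build agent, not at minikube or the test itself. A quick host-side check, as a sketch; the socket path comes from the log above, but the service-management command is an assumption about a typical Homebrew install:

```shell
#!/bin/sh
# Check whether the socket_vmnet daemon's Unix socket exists on the host.
sock=/var/run/socket_vmnet
if [ -S "$sock" ]; then
    echo "socket present: $sock"
else
    echo "socket missing or not a socket: $sock"
    # Typical remedy on a Homebrew install (assumption, adjust per setup):
    echo 'try: sudo brew services start socket_vmnet'
fi
```

If the daemon is down, all ~40 qemu2-driver failures in this report (the 9-10s `Start` timeouts and the GUEST_PROVISION exits) would share this single root cause.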

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.08s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-758000
version_upgrade_test.go:219: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p stopped-upgrade-758000: exit status 85 (79.429041ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-322000 sudo                                | bridge-322000          | jenkins | v1.31.2 | 10 Sep 23 14:10 PDT |                     |
	|         | systemctl status kubelet --all                       |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p bridge-322000 sudo                                | bridge-322000          | jenkins | v1.31.2 | 10 Sep 23 14:10 PDT |                     |
	|         | systemctl cat kubelet                                |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p bridge-322000 sudo                                | bridge-322000          | jenkins | v1.31.2 | 10 Sep 23 14:10 PDT |                     |
	|         | journalctl -xeu kubelet --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p bridge-322000 sudo cat                            | bridge-322000          | jenkins | v1.31.2 | 10 Sep 23 14:10 PDT |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                        |         |         |                     |                     |
	| ssh     | -p bridge-322000 sudo cat                            | bridge-322000          | jenkins | v1.31.2 | 10 Sep 23 14:10 PDT |                     |
	|         | /var/lib/kubelet/config.yaml                         |                        |         |         |                     |                     |
	| ssh     | -p bridge-322000 sudo                                | bridge-322000          | jenkins | v1.31.2 | 10 Sep 23 14:10 PDT |                     |
	|         | systemctl status docker --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p bridge-322000 sudo                                | bridge-322000          | jenkins | v1.31.2 | 10 Sep 23 14:10 PDT |                     |
	|         | systemctl cat docker                                 |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p bridge-322000 sudo cat                            | bridge-322000          | jenkins | v1.31.2 | 10 Sep 23 14:10 PDT |                     |
	|         | /etc/docker/daemon.json                              |                        |         |         |                     |                     |
	| ssh     | -p bridge-322000 sudo docker                         | bridge-322000          | jenkins | v1.31.2 | 10 Sep 23 14:10 PDT |                     |
	|         | system info                                          |                        |         |         |                     |                     |
	| ssh     | -p bridge-322000 sudo                                | bridge-322000          | jenkins | v1.31.2 | 10 Sep 23 14:10 PDT |                     |
	|         | systemctl status cri-docker                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p bridge-322000 sudo                                | bridge-322000          | jenkins | v1.31.2 | 10 Sep 23 14:10 PDT |                     |
	|         | systemctl cat cri-docker                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p bridge-322000 sudo cat                            | bridge-322000          | jenkins | v1.31.2 | 10 Sep 23 14:10 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                        |         |         |                     |                     |
	| ssh     | -p bridge-322000 sudo cat                            | bridge-322000          | jenkins | v1.31.2 | 10 Sep 23 14:10 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                        |         |         |                     |                     |
	| ssh     | -p bridge-322000 sudo                                | bridge-322000          | jenkins | v1.31.2 | 10 Sep 23 14:10 PDT |                     |
	|         | cri-dockerd --version                                |                        |         |         |                     |                     |
	| ssh     | -p bridge-322000 sudo                                | bridge-322000          | jenkins | v1.31.2 | 10 Sep 23 14:10 PDT |                     |
	|         | systemctl status containerd                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p bridge-322000 sudo                                | bridge-322000          | jenkins | v1.31.2 | 10 Sep 23 14:10 PDT |                     |
	|         | systemctl cat containerd                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p bridge-322000 sudo cat                            | bridge-322000          | jenkins | v1.31.2 | 10 Sep 23 14:10 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                        |         |         |                     |                     |
	| ssh     | -p bridge-322000 sudo cat                            | bridge-322000          | jenkins | v1.31.2 | 10 Sep 23 14:10 PDT |                     |
	|         | /etc/containerd/config.toml                          |                        |         |         |                     |                     |
	| ssh     | -p bridge-322000 sudo                                | bridge-322000          | jenkins | v1.31.2 | 10 Sep 23 14:10 PDT |                     |
	|         | containerd config dump                               |                        |         |         |                     |                     |
	| ssh     | -p bridge-322000 sudo                                | bridge-322000          | jenkins | v1.31.2 | 10 Sep 23 14:10 PDT |                     |
	|         | systemctl status crio --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p bridge-322000 sudo                                | bridge-322000          | jenkins | v1.31.2 | 10 Sep 23 14:10 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                        |         |         |                     |                     |
	| ssh     | -p bridge-322000 sudo find                           | bridge-322000          | jenkins | v1.31.2 | 10 Sep 23 14:10 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-322000 sudo crio                           | bridge-322000          | jenkins | v1.31.2 | 10 Sep 23 14:10 PDT |                     |
	|         | config                                               |                        |         |         |                     |                     |
	| delete  | -p bridge-322000                                     | bridge-322000          | jenkins | v1.31.2 | 10 Sep 23 14:10 PDT | 10 Sep 23 14:10 PDT |
	| start   | -p kubenet-322000                                    | kubenet-322000         | jenkins | v1.31.2 | 10 Sep 23 14:10 PDT |                     |
	|         | --memory=3072                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                        |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                        |         |         |                     |                     |
	|         | --network-plugin=kubenet                             |                        |         |         |                     |                     |
	|         | --driver=qemu2                                       |                        |         |         |                     |                     |
	| ssh     | -p kubenet-322000 sudo cat                           | kubenet-322000         | jenkins | v1.31.2 | 10 Sep 23 14:11 PDT |                     |
	|         | /etc/nsswitch.conf                                   |                        |         |         |                     |                     |
	| ssh     | -p kubenet-322000 sudo cat                           | kubenet-322000         | jenkins | v1.31.2 | 10 Sep 23 14:11 PDT |                     |
	|         | /etc/hosts                                           |                        |         |         |                     |                     |
	| ssh     | -p kubenet-322000 sudo cat                           | kubenet-322000         | jenkins | v1.31.2 | 10 Sep 23 14:11 PDT |                     |
	|         | /etc/resolv.conf                                     |                        |         |         |                     |                     |
	| ssh     | -p kubenet-322000 sudo crictl                        | kubenet-322000         | jenkins | v1.31.2 | 10 Sep 23 14:11 PDT |                     |
	|         | pods                                                 |                        |         |         |                     |                     |
	| ssh     | -p kubenet-322000 sudo crictl                        | kubenet-322000         | jenkins | v1.31.2 | 10 Sep 23 14:11 PDT |                     |
	|         | ps --all                                             |                        |         |         |                     |                     |
	| ssh     | -p kubenet-322000 sudo find                          | kubenet-322000         | jenkins | v1.31.2 | 10 Sep 23 14:11 PDT |                     |
	|         | /etc/cni -type f -exec sh -c                         |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                        |         |         |                     |                     |
	| ssh     | -p kubenet-322000 sudo ip a s                        | kubenet-322000         | jenkins | v1.31.2 | 10 Sep 23 14:11 PDT |                     |
	| ssh     | -p kubenet-322000 sudo ip r s                        | kubenet-322000         | jenkins | v1.31.2 | 10 Sep 23 14:11 PDT |                     |
	| ssh     | -p kubenet-322000 sudo                               | kubenet-322000         | jenkins | v1.31.2 | 10 Sep 23 14:11 PDT |                     |
	|         | iptables-save                                        |                        |         |         |                     |                     |
	| ssh     | -p kubenet-322000 sudo                               | kubenet-322000         | jenkins | v1.31.2 | 10 Sep 23 14:11 PDT |                     |
	|         | iptables -t nat -L -n -v                             |                        |         |         |                     |                     |
	| ssh     | -p kubenet-322000 sudo                               | kubenet-322000         | jenkins | v1.31.2 | 10 Sep 23 14:11 PDT |                     |
	|         | systemctl status kubelet --all                       |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p kubenet-322000 sudo                               | kubenet-322000         | jenkins | v1.31.2 | 10 Sep 23 14:11 PDT |                     |
	|         | systemctl cat kubelet                                |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p kubenet-322000 sudo                               | kubenet-322000         | jenkins | v1.31.2 | 10 Sep 23 14:11 PDT |                     |
	|         | journalctl -xeu kubelet --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p kubenet-322000 sudo cat                           | kubenet-322000         | jenkins | v1.31.2 | 10 Sep 23 14:11 PDT |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                        |         |         |                     |                     |
	| ssh     | -p kubenet-322000 sudo cat                           | kubenet-322000         | jenkins | v1.31.2 | 10 Sep 23 14:11 PDT |                     |
	|         | /var/lib/kubelet/config.yaml                         |                        |         |         |                     |                     |
	| ssh     | -p kubenet-322000 sudo                               | kubenet-322000         | jenkins | v1.31.2 | 10 Sep 23 14:11 PDT |                     |
	|         | systemctl status docker --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p kubenet-322000 sudo                               | kubenet-322000         | jenkins | v1.31.2 | 10 Sep 23 14:11 PDT |                     |
	|         | systemctl cat docker                                 |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p kubenet-322000 sudo cat                           | kubenet-322000         | jenkins | v1.31.2 | 10 Sep 23 14:11 PDT |                     |
	|         | /etc/docker/daemon.json                              |                        |         |         |                     |                     |
	| ssh     | -p kubenet-322000 sudo docker                        | kubenet-322000         | jenkins | v1.31.2 | 10 Sep 23 14:11 PDT |                     |
	|         | system info                                          |                        |         |         |                     |                     |
	| ssh     | -p kubenet-322000 sudo                               | kubenet-322000         | jenkins | v1.31.2 | 10 Sep 23 14:11 PDT |                     |
	|         | systemctl status cri-docker                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p kubenet-322000 sudo                               | kubenet-322000         | jenkins | v1.31.2 | 10 Sep 23 14:11 PDT |                     |
	|         | systemctl cat cri-docker                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p kubenet-322000 sudo cat                           | kubenet-322000         | jenkins | v1.31.2 | 10 Sep 23 14:11 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                        |         |         |                     |                     |
	| ssh     | -p kubenet-322000 sudo cat                           | kubenet-322000         | jenkins | v1.31.2 | 10 Sep 23 14:11 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                        |         |         |                     |                     |
	| ssh     | -p kubenet-322000 sudo                               | kubenet-322000         | jenkins | v1.31.2 | 10 Sep 23 14:11 PDT |                     |
	|         | cri-dockerd --version                                |                        |         |         |                     |                     |
	| ssh     | -p kubenet-322000 sudo                               | kubenet-322000         | jenkins | v1.31.2 | 10 Sep 23 14:11 PDT |                     |
	|         | systemctl status containerd                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p kubenet-322000 sudo                               | kubenet-322000         | jenkins | v1.31.2 | 10 Sep 23 14:11 PDT |                     |
	|         | systemctl cat containerd                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p kubenet-322000 sudo cat                           | kubenet-322000         | jenkins | v1.31.2 | 10 Sep 23 14:11 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                        |         |         |                     |                     |
	| ssh     | -p kubenet-322000 sudo cat                           | kubenet-322000         | jenkins | v1.31.2 | 10 Sep 23 14:11 PDT |                     |
	|         | /etc/containerd/config.toml                          |                        |         |         |                     |                     |
	| ssh     | -p kubenet-322000 sudo                               | kubenet-322000         | jenkins | v1.31.2 | 10 Sep 23 14:11 PDT |                     |
	|         | containerd config dump                               |                        |         |         |                     |                     |
	| ssh     | -p kubenet-322000 sudo                               | kubenet-322000         | jenkins | v1.31.2 | 10 Sep 23 14:11 PDT |                     |
	|         | systemctl status crio --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p kubenet-322000 sudo                               | kubenet-322000         | jenkins | v1.31.2 | 10 Sep 23 14:11 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                        |         |         |                     |                     |
	| ssh     | -p kubenet-322000 sudo find                          | kubenet-322000         | jenkins | v1.31.2 | 10 Sep 23 14:11 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                        |         |         |                     |                     |
	| ssh     | -p kubenet-322000 sudo crio                          | kubenet-322000         | jenkins | v1.31.2 | 10 Sep 23 14:11 PDT |                     |
	|         | config                                               |                        |         |         |                     |                     |
	| delete  | -p kubenet-322000                                    | kubenet-322000         | jenkins | v1.31.2 | 10 Sep 23 14:11 PDT | 10 Sep 23 14:11 PDT |
	| start   | -p old-k8s-version-409000                            | old-k8s-version-409000 | jenkins | v1.31.2 | 10 Sep 23 14:11 PDT |                     |
	|         | --memory=2200                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                        |         |         |                     |                     |
	|         | --kvm-network=default                                |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                        |         |         |                     |                     |
	|         | --keep-context=false                                 |                        |         |         |                     |                     |
	|         | --driver=qemu2                                       |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                         |                        |         |         |                     |                     |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/10 14:11:04
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 14:11:04.838473    5250 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:11:04.838593    5250 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:11:04.838596    5250 out.go:309] Setting ErrFile to fd 2...
	I0910 14:11:04.838599    5250 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:11:04.838713    5250 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:11:04.839739    5250 out.go:303] Setting JSON to false
	I0910 14:11:04.854862    5250 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2439,"bootTime":1694377825,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0910 14:11:04.854933    5250 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0910 14:11:04.858746    5250 out.go:177] * [old-k8s-version-409000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0910 14:11:04.866772    5250 out.go:177]   - MINIKUBE_LOCATION=17207
	I0910 14:11:04.870644    5250 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 14:11:04.866821    5250 notify.go:220] Checking for updates...
	I0910 14:11:04.873719    5250 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 14:11:04.876754    5250 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 14:11:04.879657    5250 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	I0910 14:11:04.882718    5250 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 14:11:04.886088    5250 config.go:182] Loaded profile config "multinode-362000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:11:04.886135    5250 driver.go:373] Setting default libvirt URI to qemu:///system
	I0910 14:11:04.890752    5250 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 14:11:04.897761    5250 start.go:298] selected driver: qemu2
	I0910 14:11:04.897771    5250 start.go:902] validating driver "qemu2" against <nil>
	I0910 14:11:04.897777    5250 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 14:11:04.899703    5250 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0910 14:11:04.902706    5250 out.go:177] * Automatically selected the socket_vmnet network
	I0910 14:11:04.905857    5250 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 14:11:04.905884    5250 cni.go:84] Creating CNI manager for ""
	I0910 14:11:04.905890    5250 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0910 14:11:04.905894    5250 start_flags.go:321] config:
	{Name:old-k8s-version-409000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-409000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 14:11:04.910077    5250 iso.go:125] acquiring lock: {Name:mk7b64d48bce9081530bc49bdfed6972c82152dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:11:04.917608    5250 out.go:177] * Starting control plane node old-k8s-version-409000 in cluster old-k8s-version-409000
	I0910 14:11:04.921754    5250 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0910 14:11:04.921785    5250 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0910 14:11:04.921801    5250 cache.go:57] Caching tarball of preloaded images
	I0910 14:11:04.921882    5250 preload.go:174] Found /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 14:11:04.921888    5250 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0910 14:11:04.921953    5250 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/old-k8s-version-409000/config.json ...
	I0910 14:11:04.921966    5250 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/old-k8s-version-409000/config.json: {Name:mk9f09e1cee5e745fd175064cae805fe3cf22b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 14:11:04.922169    5250 start.go:365] acquiring machines lock for old-k8s-version-409000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:11:04.922202    5250 start.go:369] acquired machines lock for "old-k8s-version-409000" in 23.584µs
	I0910 14:11:04.922213    5250 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-409000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-409000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:11:04.922247    5250 start.go:125] createHost starting for "" (driver="qemu2")
	
	* 
	* Profile "stopped-upgrade-758000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p stopped-upgrade-758000"

                                                
                                                
-- /stdout --
version_upgrade_test.go:221: `minikube logs` after upgrade to HEAD from v1.6.2 failed: exit status 85
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (0.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (11.64s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-322000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1
E0910 14:11:12.901478    2200 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/functional-765000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-322000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (11.572501292s)

                                                
                                                
-- stdout --
	* [no-preload-322000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node no-preload-322000 in cluster no-preload-322000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-322000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 14:11:05.461419    5277 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:11:05.461518    5277 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:11:05.461521    5277 out.go:309] Setting ErrFile to fd 2...
	I0910 14:11:05.461524    5277 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:11:05.461633    5277 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:11:05.462650    5277 out.go:303] Setting JSON to false
	I0910 14:11:05.477510    5277 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2440,"bootTime":1694377825,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0910 14:11:05.477581    5277 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0910 14:11:05.480677    5277 out.go:177] * [no-preload-322000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0910 14:11:05.487600    5277 out.go:177]   - MINIKUBE_LOCATION=17207
	I0910 14:11:05.487668    5277 notify.go:220] Checking for updates...
	I0910 14:11:05.491522    5277 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 14:11:05.494508    5277 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 14:11:05.497541    5277 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 14:11:05.500586    5277 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	I0910 14:11:05.503474    5277 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 14:11:05.506841    5277 config.go:182] Loaded profile config "multinode-362000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:11:05.506904    5277 config.go:182] Loaded profile config "old-k8s-version-409000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0910 14:11:05.506946    5277 driver.go:373] Setting default libvirt URI to qemu:///system
	I0910 14:11:05.511617    5277 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 14:11:05.518548    5277 start.go:298] selected driver: qemu2
	I0910 14:11:05.518556    5277 start.go:902] validating driver "qemu2" against <nil>
	I0910 14:11:05.518563    5277 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 14:11:05.520449    5277 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0910 14:11:05.523615    5277 out.go:177] * Automatically selected the socket_vmnet network
	I0910 14:11:05.527595    5277 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 14:11:05.527617    5277 cni.go:84] Creating CNI manager for ""
	I0910 14:11:05.527624    5277 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 14:11:05.527629    5277 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 14:11:05.527635    5277 start_flags.go:321] config:
	{Name:no-preload-322000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-322000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 14:11:05.531545    5277 iso.go:125] acquiring lock: {Name:mk7b64d48bce9081530bc49bdfed6972c82152dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:11:05.538414    5277 out.go:177] * Starting control plane node no-preload-322000 in cluster no-preload-322000
	I0910 14:11:05.542525    5277 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0910 14:11:05.542638    5277 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/no-preload-322000/config.json ...
	I0910 14:11:05.542668    5277 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/no-preload-322000/config.json: {Name:mk874363a192e201949bae3535ad1b323c97fc96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 14:11:05.542676    5277 cache.go:107] acquiring lock: {Name:mk54fafb2c8726195c146a28b31a05730133ba38 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:11:05.542685    5277 cache.go:107] acquiring lock: {Name:mkc090077c9b459dfa2fbdfe347de260b7afcd35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:11:05.542708    5277 cache.go:107] acquiring lock: {Name:mk9696d06492c8b8df03da27dc962c04433876be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:11:05.542747    5277 cache.go:115] /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0910 14:11:05.542754    5277 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 80µs
	I0910 14:11:05.542764    5277 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0910 14:11:05.542770    5277 cache.go:107] acquiring lock: {Name:mkd7df137ed915a3abcc8bcb655ec54fa854ba2f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:11:05.542695    5277 cache.go:107] acquiring lock: {Name:mk543f243b9df83639e4afe71b2c876000b6956e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:11:05.542809    5277 cache.go:107] acquiring lock: {Name:mk338e718a434c07a2a609df243df74296b40193 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:11:05.542821    5277 cache.go:107] acquiring lock: {Name:mk011a162f1decfd7c0d2046a504a03cbd0c4977 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:11:05.542871    5277 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.1
	I0910 14:11:05.542938    5277 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0910 14:11:05.542903    5277 cache.go:107] acquiring lock: {Name:mk4231d1216272659720fd4f4c8c19fe0a02bfaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:11:05.542993    5277 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.1
	I0910 14:11:05.543012    5277 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0910 14:11:05.543031    5277 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0910 14:11:05.543058    5277 start.go:365] acquiring machines lock for no-preload-322000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:11:05.543114    5277 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.1
	I0910 14:11:05.543144    5277 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0910 14:11:05.550283    5277 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0910 14:11:05.550308    5277 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0910 14:11:05.551012    5277 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.1
	I0910 14:11:05.551407    5277 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.1
	I0910 14:11:05.552888    5277 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0910 14:11:05.552929    5277 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.1
	I0910 14:11:05.552999    5277 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0910 14:11:06.141864    5277 cache.go:162] opening:  /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0
	I0910 14:11:06.187036    5277 cache.go:162] opening:  /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0910 14:11:06.308555    5277 cache.go:157] /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0910 14:11:06.308575    5277 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 765.895083ms
	I0910 14:11:06.308587    5277 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0910 14:11:06.399821    5277 cache.go:162] opening:  /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1
	I0910 14:11:06.549708    5277 cache.go:162] opening:  /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1
	I0910 14:11:06.760804    5277 cache.go:162] opening:  /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1
	I0910 14:11:06.964062    5277 cache.go:162] opening:  /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1
	I0910 14:11:07.195760    5277 cache.go:162] opening:  /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1
	I0910 14:11:07.336436    5277 start.go:369] acquired machines lock for "no-preload-322000" in 1.793362125s
	I0910 14:11:07.336567    5277 start.go:93] Provisioning new machine with config: &{Name:no-preload-322000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-322000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:11:07.336816    5277 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:11:07.348410    5277 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 14:11:07.397056    5277 start.go:159] libmachine.API.Create for "no-preload-322000" (driver="qemu2")
	I0910 14:11:07.397100    5277 client.go:168] LocalClient.Create starting
	I0910 14:11:07.397210    5277 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:11:07.397261    5277 main.go:141] libmachine: Decoding PEM data...
	I0910 14:11:07.397285    5277 main.go:141] libmachine: Parsing certificate...
	I0910 14:11:07.397377    5277 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:11:07.397418    5277 main.go:141] libmachine: Decoding PEM data...
	I0910 14:11:07.397432    5277 main.go:141] libmachine: Parsing certificate...
	I0910 14:11:07.398102    5277 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:11:07.526920    5277 main.go:141] libmachine: Creating SSH key...
	I0910 14:11:07.607623    5277 main.go:141] libmachine: Creating Disk image...
	I0910 14:11:07.607632    5277 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:11:07.607779    5277 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/no-preload-322000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/no-preload-322000/disk.qcow2
	I0910 14:11:07.616775    5277 main.go:141] libmachine: STDOUT: 
	I0910 14:11:07.616788    5277 main.go:141] libmachine: STDERR: 
	I0910 14:11:07.616832    5277 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/no-preload-322000/disk.qcow2 +20000M
	I0910 14:11:07.624087    5277 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:11:07.624100    5277 main.go:141] libmachine: STDERR: 
	I0910 14:11:07.624117    5277 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/no-preload-322000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/no-preload-322000/disk.qcow2
	I0910 14:11:07.624125    5277 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:11:07.624165    5277 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/no-preload-322000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/no-preload-322000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/no-preload-322000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:56:5c:26:59:25 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/no-preload-322000/disk.qcow2
	I0910 14:11:07.625720    5277 main.go:141] libmachine: STDOUT: 
	I0910 14:11:07.625734    5277 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:11:07.625754    5277 client.go:171] LocalClient.Create took 228.646542ms
	I0910 14:11:08.839157    5277 cache.go:157] /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0910 14:11:08.839232    5277 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 3.296463209s
	I0910 14:11:08.839291    5277 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0910 14:11:09.197086    5277 cache.go:157] /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1 exists
	I0910 14:11:09.197158    5277 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.1" -> "/Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1" took 3.65439925s
	I0910 14:11:09.197212    5277 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.1 -> /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1 succeeded
	I0910 14:11:09.625989    5277 start.go:128] duration metric: createHost completed in 2.289146833s
	I0910 14:11:09.626063    5277 start.go:83] releasing machines lock for "no-preload-322000", held for 2.289592375s
	W0910 14:11:09.626128    5277 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:11:09.634286    5277 out.go:177] * Deleting "no-preload-322000" in qemu2 ...
	W0910 14:11:09.656604    5277 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:11:09.656650    5277 start.go:687] Will try again in 5 seconds ...
	I0910 14:11:09.700122    5277 cache.go:157] /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1 exists
	I0910 14:11:09.700179    5277 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.1" -> "/Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1" took 4.157471667s
	I0910 14:11:09.700204    5277 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.1 -> /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1 succeeded
	I0910 14:11:11.019621    5277 cache.go:157] /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1 exists
	I0910 14:11:11.019671    5277 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.1" -> "/Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1" took 5.476820166s
	I0910 14:11:11.019722    5277 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.1 -> /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1 succeeded
	I0910 14:11:11.154713    5277 cache.go:157] /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1 exists
	I0910 14:11:11.154772    5277 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.1" -> "/Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1" took 5.6120935s
	I0910 14:11:11.154807    5277 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.1 -> /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1 succeeded
	I0910 14:11:14.297348    5277 cache.go:157] /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 exists
	I0910 14:11:14.297400    5277 cache.go:96] cache image "registry.k8s.io/etcd:3.5.9-0" -> "/Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0" took 8.754618083s
	I0910 14:11:14.297431    5277 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.9-0 -> /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 succeeded
	I0910 14:11:14.297486    5277 cache.go:87] Successfully saved all images to host disk.
	I0910 14:11:14.658760    5277 start.go:365] acquiring machines lock for no-preload-322000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:11:14.694539    5277 start.go:369] acquired machines lock for "no-preload-322000" in 35.709125ms
	I0910 14:11:14.694696    5277 start.go:93] Provisioning new machine with config: &{Name:no-preload-322000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-322000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:11:14.694967    5277 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:11:14.707475    5277 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 14:11:14.753993    5277 start.go:159] libmachine.API.Create for "no-preload-322000" (driver="qemu2")
	I0910 14:11:14.754037    5277 client.go:168] LocalClient.Create starting
	I0910 14:11:14.754156    5277 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:11:14.754208    5277 main.go:141] libmachine: Decoding PEM data...
	I0910 14:11:14.754234    5277 main.go:141] libmachine: Parsing certificate...
	I0910 14:11:14.754321    5277 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:11:14.754350    5277 main.go:141] libmachine: Decoding PEM data...
	I0910 14:11:14.754365    5277 main.go:141] libmachine: Parsing certificate...
	I0910 14:11:14.754803    5277 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:11:14.888987    5277 main.go:141] libmachine: Creating SSH key...
	I0910 14:11:14.939201    5277 main.go:141] libmachine: Creating Disk image...
	I0910 14:11:14.939209    5277 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:11:14.939355    5277 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/no-preload-322000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/no-preload-322000/disk.qcow2
	I0910 14:11:14.948206    5277 main.go:141] libmachine: STDOUT: 
	I0910 14:11:14.948224    5277 main.go:141] libmachine: STDERR: 
	I0910 14:11:14.948281    5277 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/no-preload-322000/disk.qcow2 +20000M
	I0910 14:11:14.956095    5277 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:11:14.956113    5277 main.go:141] libmachine: STDERR: 
	I0910 14:11:14.956126    5277 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/no-preload-322000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/no-preload-322000/disk.qcow2
	I0910 14:11:14.956135    5277 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:11:14.956189    5277 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/no-preload-322000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/no-preload-322000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/no-preload-322000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:18:19:53:13:f4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/no-preload-322000/disk.qcow2
	I0910 14:11:14.957898    5277 main.go:141] libmachine: STDOUT: 
	I0910 14:11:14.957913    5277 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:11:14.957925    5277 client.go:171] LocalClient.Create took 203.883709ms
	I0910 14:11:16.960095    5277 start.go:128] duration metric: createHost completed in 2.265095791s
	I0910 14:11:16.960176    5277 start.go:83] releasing machines lock for "no-preload-322000", held for 2.265603416s
	W0910 14:11:16.960578    5277 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-322000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-322000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:11:16.972273    5277 out.go:177] 
	W0910 14:11:16.979355    5277 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 14:11:16.979384    5277 out.go:239] * 
	* 
	W0910 14:11:16.982099    5277 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 14:11:16.990257    5277 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-322000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000: exit status 7 (61.003ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-322000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (11.64s)
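Triage note (not part of the test output): every failure in this group reduces to the same error, `Failed to connect to "/var/run/socket_vmnet": Connection refused`, which normally means the socket_vmnet daemon on the CI host is not listening on its unix socket. The sketch below is a minimal, hypothetical check one could run on the agent before re-running the suite; `check_socket` is a helper invented here, not part of minikube or socket_vmnet.

```shell
# Reports the state of a unix-socket path: "missing" if the daemon never
# created it, "not-a-socket" if the path exists but is an ordinary file,
# "present" if a socket is there (the daemon may still need a restart).
check_socket() {
  if [ ! -e "$1" ]; then
    echo "missing"
  elif [ ! -S "$1" ]; then
    echo "not-a-socket"
  else
    echo "present"
  fi
}

check_socket /var/run/socket_vmnet
```

If the socket is missing, restarting the daemon before re-running the suite is the usual fix; for a Homebrew install that is typically something like `sudo brew services restart socket_vmnet`, though the exact service name depends on how socket_vmnet was installed on the agent.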

TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-409000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-409000 create -f testdata/busybox.yaml: exit status 1 (30.619625ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-409000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-409000 -n old-k8s-version-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-409000 -n old-k8s-version-409000: exit status 7 (33.871417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-409000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-409000 -n old-k8s-version-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-409000 -n old-k8s-version-409000: exit status 7 (33.878625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-409000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-409000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-409000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-409000 describe deploy/metrics-server -n kube-system: exit status 1 (26.539125ms)

** stderr ** 
	error: context "old-k8s-version-409000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-409000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-409000 -n old-k8s-version-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-409000 -n old-k8s-version-409000: exit status 7 (28.583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-409000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/old-k8s-version/serial/SecondStart (6.97s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-409000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-409000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (6.902422375s)

-- stdout --
	* [old-k8s-version-409000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	* Using the qemu2 driver based on existing profile
	* Starting control plane node old-k8s-version-409000 in cluster old-k8s-version-409000
	* Restarting existing qemu2 VM for "old-k8s-version-409000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-409000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 14:11:15.175395    5413 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:11:15.175515    5413 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:11:15.175518    5413 out.go:309] Setting ErrFile to fd 2...
	I0910 14:11:15.175521    5413 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:11:15.175645    5413 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:11:15.176611    5413 out.go:303] Setting JSON to false
	I0910 14:11:15.191543    5413 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2450,"bootTime":1694377825,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0910 14:11:15.191618    5413 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0910 14:11:15.196532    5413 out.go:177] * [old-k8s-version-409000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0910 14:11:15.203497    5413 out.go:177]   - MINIKUBE_LOCATION=17207
	I0910 14:11:15.207445    5413 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 14:11:15.203563    5413 notify.go:220] Checking for updates...
	I0910 14:11:15.213470    5413 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 14:11:15.216476    5413 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 14:11:15.217866    5413 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	I0910 14:11:15.220476    5413 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 14:11:15.223801    5413 config.go:182] Loaded profile config "old-k8s-version-409000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0910 14:11:15.227464    5413 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0910 14:11:15.230436    5413 driver.go:373] Setting default libvirt URI to qemu:///system
	I0910 14:11:15.234445    5413 out.go:177] * Using the qemu2 driver based on existing profile
	I0910 14:11:15.241465    5413 start.go:298] selected driver: qemu2
	I0910 14:11:15.241472    5413 start.go:902] validating driver "qemu2" against &{Name:old-k8s-version-409000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-409000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequ
ested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 14:11:15.241546    5413 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 14:11:15.243588    5413 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 14:11:15.243621    5413 cni.go:84] Creating CNI manager for ""
	I0910 14:11:15.243629    5413 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0910 14:11:15.243634    5413 start_flags.go:321] config:
	{Name:old-k8s-version-409000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-409000 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Use
rs:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 14:11:15.247609    5413 iso.go:125] acquiring lock: {Name:mk7b64d48bce9081530bc49bdfed6972c82152dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:11:15.253451    5413 out.go:177] * Starting control plane node old-k8s-version-409000 in cluster old-k8s-version-409000
	I0910 14:11:15.257440    5413 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0910 14:11:15.257458    5413 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0910 14:11:15.257472    5413 cache.go:57] Caching tarball of preloaded images
	I0910 14:11:15.257544    5413 preload.go:174] Found /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 14:11:15.257550    5413 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0910 14:11:15.257615    5413 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/old-k8s-version-409000/config.json ...
	I0910 14:11:15.257930    5413 start.go:365] acquiring machines lock for old-k8s-version-409000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:11:16.960311    5413 start.go:369] acquired machines lock for "old-k8s-version-409000" in 1.702355792s
	I0910 14:11:16.960573    5413 start.go:96] Skipping create...Using existing machine configuration
	I0910 14:11:16.960611    5413 fix.go:54] fixHost starting: 
	I0910 14:11:16.961351    5413 fix.go:102] recreateIfNeeded on old-k8s-version-409000: state=Stopped err=<nil>
	W0910 14:11:16.961395    5413 fix.go:128] unexpected machine state, will restart: <nil>
	I0910 14:11:16.976219    5413 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-409000" ...
	I0910 14:11:16.982497    5413 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/old-k8s-version-409000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/old-k8s-version-409000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/old-k8s-version-409000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:fe:72:0d:76:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/old-k8s-version-409000/disk.qcow2
	I0910 14:11:16.991358    5413 main.go:141] libmachine: STDOUT: 
	I0910 14:11:16.991434    5413 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:11:16.991556    5413 fix.go:56] fixHost completed within 30.953833ms
	I0910 14:11:16.991575    5413 start.go:83] releasing machines lock for "old-k8s-version-409000", held for 31.232417ms
	W0910 14:11:16.991614    5413 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 14:11:16.991759    5413 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:11:16.991775    5413 start.go:687] Will try again in 5 seconds ...
	I0910 14:11:21.994025    5413 start.go:365] acquiring machines lock for old-k8s-version-409000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:11:21.994440    5413 start.go:369] acquired machines lock for "old-k8s-version-409000" in 271.417µs
	I0910 14:11:21.994593    5413 start.go:96] Skipping create...Using existing machine configuration
	I0910 14:11:21.994613    5413 fix.go:54] fixHost starting: 
	I0910 14:11:21.995364    5413 fix.go:102] recreateIfNeeded on old-k8s-version-409000: state=Stopped err=<nil>
	W0910 14:11:21.995392    5413 fix.go:128] unexpected machine state, will restart: <nil>
	I0910 14:11:22.000115    5413 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-409000" ...
	I0910 14:11:22.007008    5413 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/old-k8s-version-409000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/old-k8s-version-409000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/old-k8s-version-409000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:fe:72:0d:76:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/old-k8s-version-409000/disk.qcow2
	I0910 14:11:22.015185    5413 main.go:141] libmachine: STDOUT: 
	I0910 14:11:22.015259    5413 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:11:22.015334    5413 fix.go:56] fixHost completed within 20.721875ms
	I0910 14:11:22.015351    5413 start.go:83] releasing machines lock for "old-k8s-version-409000", held for 20.892791ms
	W0910 14:11:22.015509    5413 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-409000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-409000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:11:22.022836    5413 out.go:177] 
	W0910 14:11:22.026896    5413 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 14:11:22.026929    5413 out.go:239] * 
	* 
	W0910 14:11:22.029557    5413 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 14:11:22.037870    5413 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-409000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-409000 -n old-k8s-version-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-409000 -n old-k8s-version-409000: exit status 7 (67.738792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-409000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (6.97s)

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-322000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-322000 create -f testdata/busybox.yaml: exit status 1 (29.176042ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-322000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000: exit status 7 (28.520375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-322000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000: exit status 7 (29.152ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-322000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-322000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-322000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-322000 describe deploy/metrics-server -n kube-system: exit status 1 (25.96175ms)

** stderr ** 
	error: context "no-preload-322000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-322000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000: exit status 7 (28.794875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-322000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/no-preload/serial/SecondStart (5.2s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-322000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-322000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (5.169785834s)

-- stdout --
	* [no-preload-322000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node no-preload-322000 in cluster no-preload-322000
	* Restarting existing qemu2 VM for "no-preload-322000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-322000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 14:11:17.448803    5438 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:11:17.448906    5438 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:11:17.448909    5438 out.go:309] Setting ErrFile to fd 2...
	I0910 14:11:17.448911    5438 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:11:17.449018    5438 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:11:17.449983    5438 out.go:303] Setting JSON to false
	I0910 14:11:17.464828    5438 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2452,"bootTime":1694377825,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0910 14:11:17.464887    5438 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0910 14:11:17.469181    5438 out.go:177] * [no-preload-322000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0910 14:11:17.476166    5438 out.go:177]   - MINIKUBE_LOCATION=17207
	I0910 14:11:17.480141    5438 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 14:11:17.476210    5438 notify.go:220] Checking for updates...
	I0910 14:11:17.487999    5438 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 14:11:17.491181    5438 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 14:11:17.494208    5438 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	I0910 14:11:17.497249    5438 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 14:11:17.500517    5438 config.go:182] Loaded profile config "no-preload-322000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:11:17.500759    5438 driver.go:373] Setting default libvirt URI to qemu:///system
	I0910 14:11:17.505181    5438 out.go:177] * Using the qemu2 driver based on existing profile
	I0910 14:11:17.512211    5438 start.go:298] selected driver: qemu2
	I0910 14:11:17.512219    5438 start.go:902] validating driver "qemu2" against &{Name:no-preload-322000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-322000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 14:11:17.512278    5438 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 14:11:17.514279    5438 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 14:11:17.514312    5438 cni.go:84] Creating CNI manager for ""
	I0910 14:11:17.514318    5438 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 14:11:17.514324    5438 start_flags.go:321] config:
	{Name:no-preload-322000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-322000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 14:11:17.518255    5438 iso.go:125] acquiring lock: {Name:mk7b64d48bce9081530bc49bdfed6972c82152dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:11:17.526190    5438 out.go:177] * Starting control plane node no-preload-322000 in cluster no-preload-322000
	I0910 14:11:17.530341    5438 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0910 14:11:17.530448    5438 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/no-preload-322000/config.json ...
	I0910 14:11:17.530449    5438 cache.go:107] acquiring lock: {Name:mk4231d1216272659720fd4f4c8c19fe0a02bfaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:11:17.530451    5438 cache.go:107] acquiring lock: {Name:mk54fafb2c8726195c146a28b31a05730133ba38 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:11:17.530474    5438 cache.go:107] acquiring lock: {Name:mk011a162f1decfd7c0d2046a504a03cbd0c4977 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:11:17.530515    5438 cache.go:115] /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0910 14:11:17.530517    5438 cache.go:115] /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1 exists
	I0910 14:11:17.530521    5438 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 72.833µs
	I0910 14:11:17.530522    5438 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.1" -> "/Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1" took 91.291µs
	I0910 14:11:17.530528    5438 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.1 -> /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1 succeeded
	I0910 14:11:17.530528    5438 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0910 14:11:17.530533    5438 cache.go:107] acquiring lock: {Name:mkc090077c9b459dfa2fbdfe347de260b7afcd35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:11:17.530539    5438 cache.go:107] acquiring lock: {Name:mk543f243b9df83639e4afe71b2c876000b6956e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:11:17.530563    5438 cache.go:107] acquiring lock: {Name:mk338e718a434c07a2a609df243df74296b40193 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:11:17.530575    5438 cache.go:115] /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1 exists
	I0910 14:11:17.530578    5438 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.1" -> "/Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1" took 46.125µs
	I0910 14:11:17.530583    5438 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.1 -> /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1 succeeded
	I0910 14:11:17.530581    5438 cache.go:115] /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0910 14:11:17.530591    5438 cache.go:107] acquiring lock: {Name:mkd7df137ed915a3abcc8bcb655ec54fa854ba2f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:11:17.530606    5438 cache.go:115] /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 exists
	I0910 14:11:17.530590    5438 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 51.834µs
	I0910 14:11:17.530610    5438 cache.go:96] cache image "registry.k8s.io/etcd:3.5.9-0" -> "/Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0" took 48.917µs
	I0910 14:11:17.530624    5438 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.9-0 -> /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 succeeded
	I0910 14:11:17.530623    5438 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0910 14:11:17.530540    5438 cache.go:115] /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1 exists
	I0910 14:11:17.530635    5438 cache.go:115] /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0910 14:11:17.530638    5438 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 48.125µs
	I0910 14:11:17.530645    5438 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0910 14:11:17.530636    5438 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.1" -> "/Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1" took 179.375µs
	I0910 14:11:17.530648    5438 cache.go:107] acquiring lock: {Name:mk9696d06492c8b8df03da27dc962c04433876be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:11:17.530650    5438 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.1 -> /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1 succeeded
	I0910 14:11:17.530706    5438 cache.go:115] /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1 exists
	I0910 14:11:17.530711    5438 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.1" -> "/Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1" took 119.917µs
	I0910 14:11:17.530718    5438 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.1 -> /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1 succeeded
	I0910 14:11:17.530724    5438 cache.go:87] Successfully saved all images to host disk.
	I0910 14:11:17.530844    5438 start.go:365] acquiring machines lock for no-preload-322000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:11:17.530879    5438 start.go:369] acquired machines lock for "no-preload-322000" in 28.167µs
	I0910 14:11:17.530888    5438 start.go:96] Skipping create...Using existing machine configuration
	I0910 14:11:17.530894    5438 fix.go:54] fixHost starting: 
	I0910 14:11:17.531015    5438 fix.go:102] recreateIfNeeded on no-preload-322000: state=Stopped err=<nil>
	W0910 14:11:17.531024    5438 fix.go:128] unexpected machine state, will restart: <nil>
	I0910 14:11:17.539166    5438 out.go:177] * Restarting existing qemu2 VM for "no-preload-322000" ...
	I0910 14:11:17.543180    5438 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/no-preload-322000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/no-preload-322000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/no-preload-322000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:18:19:53:13:f4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/no-preload-322000/disk.qcow2
	I0910 14:11:17.545007    5438 main.go:141] libmachine: STDOUT: 
	I0910 14:11:17.545027    5438 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:11:17.545056    5438 fix.go:56] fixHost completed within 14.161208ms
	I0910 14:11:17.545061    5438 start.go:83] releasing machines lock for "no-preload-322000", held for 14.178416ms
	W0910 14:11:17.545068    5438 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 14:11:17.545094    5438 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:11:17.545099    5438 start.go:687] Will try again in 5 seconds ...
	I0910 14:11:22.547163    5438 start.go:365] acquiring machines lock for no-preload-322000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:11:22.547243    5438 start.go:369] acquired machines lock for "no-preload-322000" in 64.833µs
	I0910 14:11:22.547269    5438 start.go:96] Skipping create...Using existing machine configuration
	I0910 14:11:22.547272    5438 fix.go:54] fixHost starting: 
	I0910 14:11:22.547402    5438 fix.go:102] recreateIfNeeded on no-preload-322000: state=Stopped err=<nil>
	W0910 14:11:22.547408    5438 fix.go:128] unexpected machine state, will restart: <nil>
	I0910 14:11:22.549972    5438 out.go:177] * Restarting existing qemu2 VM for "no-preload-322000" ...
	I0910 14:11:22.558039    5438 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/no-preload-322000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/no-preload-322000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/no-preload-322000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:18:19:53:13:f4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/no-preload-322000/disk.qcow2
	I0910 14:11:22.559754    5438 main.go:141] libmachine: STDOUT: 
	I0910 14:11:22.559771    5438 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:11:22.559793    5438 fix.go:56] fixHost completed within 12.52ms
	I0910 14:11:22.559798    5438 start.go:83] releasing machines lock for "no-preload-322000", held for 12.548333ms
	W0910 14:11:22.559848    5438 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-322000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-322000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:11:22.566967    5438 out.go:177] 
	W0910 14:11:22.570014    5438 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 14:11:22.570027    5438 out.go:239] * 
	* 
	W0910 14:11:22.570523    5438 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 14:11:22.583023    5438 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-322000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000: exit status 7 (33.219208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-322000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-409000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-409000 -n old-k8s-version-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-409000 -n old-k8s-version-409000: exit status 7 (31.339209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-409000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-409000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-409000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-409000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.949ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-409000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-409000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-409000 -n old-k8s-version-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-409000 -n old-k8s-version-409000: exit status 7 (28.659041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-409000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p old-k8s-version-409000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p old-k8s-version-409000 "sudo crictl images -o json": exit status 89 (38.5195ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-409000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p old-k8s-version-409000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p old-k8s-version-409000"
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-409000 -n old-k8s-version-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-409000 -n old-k8s-version-409000: exit status 7 (28.100292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-409000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-409000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-409000 --alsologtostderr -v=1: exit status 89 (40.21925ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-409000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0910 14:11:22.301580    5457 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:11:22.301923    5457 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:11:22.301927    5457 out.go:309] Setting ErrFile to fd 2...
	I0910 14:11:22.301929    5457 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:11:22.302099    5457 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:11:22.302284    5457 out.go:303] Setting JSON to false
	I0910 14:11:22.302292    5457 mustload.go:65] Loading cluster: old-k8s-version-409000
	I0910 14:11:22.302461    5457 config.go:182] Loaded profile config "old-k8s-version-409000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0910 14:11:22.305966    5457 out.go:177] * The control plane node must be running for this command
	I0910 14:11:22.310055    5457 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-409000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-409000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-409000 -n old-k8s-version-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-409000 -n old-k8s-version-409000: exit status 7 (28.808667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-409000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-409000 -n old-k8s-version-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-409000 -n old-k8s-version-409000: exit status 7 (28.906083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-409000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-322000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000: exit status 7 (29.819709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-322000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-322000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-322000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-322000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.581667ms)

** stderr ** 
	error: context "no-preload-322000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-322000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000: exit status 7 (32.06025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-322000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p no-preload-322000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p no-preload-322000 "sudo crictl images -o json": exit status 89 (46.399708ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-322000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p no-preload-322000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p no-preload-322000"
start_stop_delete_test.go:304: v1.28.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.1",
- 	"registry.k8s.io/kube-controller-manager:v1.28.1",
- 	"registry.k8s.io/kube-proxy:v1.28.1",
- 	"registry.k8s.io/kube-scheduler:v1.28.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000: exit status 7 (31.184583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-322000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)
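The secondary error here ("failed to decode images json invalid character '*'") comes from feeding minikube's advice text, which begins with `*`, into a JSON decoder. A hedged sketch of a guard for that case (function name is illustrative; the `images`/`repoTags` field names follow `crictl images -o json` output):

```python
import json
from typing import List, Optional


def parse_crictl_images(output: str) -> Optional[List[str]]:
    """Extract image repo tags from `crictl images -o json` output.

    Returns None when the output is not JSON at all, e.g. the minikube
    advice text beginning with '*' that broke decoding in this report.
    """
    text = output.strip()
    if not text.startswith("{"):
        return None  # not JSON; caller should report the raw output instead
    data = json.loads(text)
    return sorted(
        tag
        for image in data.get("images", [])
        for tag in image.get("repoTags", [])
    )
```

With a guard like this, the test output would show the underlying "control plane node must be running" message rather than a decoder error.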

TestStartStop/group/embed-certs/serial/FirstStart (9.97s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-701000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-701000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (9.922449708s)

-- stdout --
	* [embed-certs-701000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node embed-certs-701000 in cluster embed-certs-701000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-701000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 14:11:22.806020    5491 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:11:22.806124    5491 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:11:22.806128    5491 out.go:309] Setting ErrFile to fd 2...
	I0910 14:11:22.806130    5491 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:11:22.806247    5491 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:11:22.807395    5491 out.go:303] Setting JSON to false
	I0910 14:11:22.824478    5491 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2457,"bootTime":1694377825,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0910 14:11:22.824527    5491 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0910 14:11:22.834994    5491 out.go:177] * [embed-certs-701000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0910 14:11:22.840971    5491 notify.go:220] Checking for updates...
	I0910 14:11:22.844933    5491 out.go:177]   - MINIKUBE_LOCATION=17207
	I0910 14:11:22.848894    5491 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 14:11:22.851971    5491 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 14:11:22.855928    5491 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 14:11:22.858936    5491 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	I0910 14:11:22.861995    5491 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 14:11:22.865225    5491 config.go:182] Loaded profile config "multinode-362000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:11:22.865287    5491 config.go:182] Loaded profile config "no-preload-322000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:11:22.865335    5491 driver.go:373] Setting default libvirt URI to qemu:///system
	I0910 14:11:22.869028    5491 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 14:11:22.875999    5491 start.go:298] selected driver: qemu2
	I0910 14:11:22.876006    5491 start.go:902] validating driver "qemu2" against <nil>
	I0910 14:11:22.876012    5491 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 14:11:22.877990    5491 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0910 14:11:22.881954    5491 out.go:177] * Automatically selected the socket_vmnet network
	I0910 14:11:22.885039    5491 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 14:11:22.885060    5491 cni.go:84] Creating CNI manager for ""
	I0910 14:11:22.885067    5491 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 14:11:22.885071    5491 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 14:11:22.885075    5491 start_flags.go:321] config:
	{Name:embed-certs-701000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-701000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SS
HAgentPID:0 AutoPauseInterval:1m0s}
	I0910 14:11:22.890786    5491 iso.go:125] acquiring lock: {Name:mk7b64d48bce9081530bc49bdfed6972c82152dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:11:22.898976    5491 out.go:177] * Starting control plane node embed-certs-701000 in cluster embed-certs-701000
	I0910 14:11:22.902975    5491 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0910 14:11:22.903006    5491 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0910 14:11:22.903033    5491 cache.go:57] Caching tarball of preloaded images
	I0910 14:11:22.903112    5491 preload.go:174] Found /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 14:11:22.903117    5491 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0910 14:11:22.903205    5491 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/embed-certs-701000/config.json ...
	I0910 14:11:22.903217    5491 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/embed-certs-701000/config.json: {Name:mk02dcfba54721e55eb1a10ed3aa622a48084505 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 14:11:22.903391    5491 start.go:365] acquiring machines lock for embed-certs-701000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:11:22.903417    5491 start.go:369] acquired machines lock for "embed-certs-701000" in 20.083µs
	I0910 14:11:22.903425    5491 start.go:93] Provisioning new machine with config: &{Name:embed-certs-701000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-701000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:11:22.903465    5491 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:11:22.913965    5491 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 14:11:22.928495    5491 start.go:159] libmachine.API.Create for "embed-certs-701000" (driver="qemu2")
	I0910 14:11:22.928533    5491 client.go:168] LocalClient.Create starting
	I0910 14:11:22.928611    5491 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:11:22.928635    5491 main.go:141] libmachine: Decoding PEM data...
	I0910 14:11:22.928646    5491 main.go:141] libmachine: Parsing certificate...
	I0910 14:11:22.928687    5491 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:11:22.928709    5491 main.go:141] libmachine: Decoding PEM data...
	I0910 14:11:22.928715    5491 main.go:141] libmachine: Parsing certificate...
	I0910 14:11:22.929042    5491 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:11:23.088438    5491 main.go:141] libmachine: Creating SSH key...
	I0910 14:11:23.316191    5491 main.go:141] libmachine: Creating Disk image...
	I0910 14:11:23.316201    5491 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:11:23.316341    5491 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/embed-certs-701000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/embed-certs-701000/disk.qcow2
	I0910 14:11:23.325759    5491 main.go:141] libmachine: STDOUT: 
	I0910 14:11:23.325787    5491 main.go:141] libmachine: STDERR: 
	I0910 14:11:23.325872    5491 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/embed-certs-701000/disk.qcow2 +20000M
	I0910 14:11:23.333924    5491 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:11:23.333939    5491 main.go:141] libmachine: STDERR: 
	I0910 14:11:23.333966    5491 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/embed-certs-701000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/embed-certs-701000/disk.qcow2
	I0910 14:11:23.333974    5491 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:11:23.334014    5491 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/embed-certs-701000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/embed-certs-701000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/embed-certs-701000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:a3:24:ca:34:93 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/embed-certs-701000/disk.qcow2
	I0910 14:11:23.335726    5491 main.go:141] libmachine: STDOUT: 
	I0910 14:11:23.335739    5491 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:11:23.335761    5491 client.go:171] LocalClient.Create took 407.2235ms
	I0910 14:11:25.337957    5491 start.go:128] duration metric: createHost completed in 2.43447275s
	I0910 14:11:25.338038    5491 start.go:83] releasing machines lock for "embed-certs-701000", held for 2.4346175s
	W0910 14:11:25.338132    5491 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:11:25.354265    5491 out.go:177] * Deleting "embed-certs-701000" in qemu2 ...
	W0910 14:11:25.369569    5491 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:11:25.369597    5491 start.go:687] Will try again in 5 seconds ...
	I0910 14:11:30.371834    5491 start.go:365] acquiring machines lock for embed-certs-701000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:11:30.372299    5491 start.go:369] acquired machines lock for "embed-certs-701000" in 357.75µs
	I0910 14:11:30.372427    5491 start.go:93] Provisioning new machine with config: &{Name:embed-certs-701000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-701000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:11:30.372791    5491 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:11:30.381396    5491 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 14:11:30.427048    5491 start.go:159] libmachine.API.Create for "embed-certs-701000" (driver="qemu2")
	I0910 14:11:30.427101    5491 client.go:168] LocalClient.Create starting
	I0910 14:11:30.427213    5491 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:11:30.427273    5491 main.go:141] libmachine: Decoding PEM data...
	I0910 14:11:30.427290    5491 main.go:141] libmachine: Parsing certificate...
	I0910 14:11:30.427370    5491 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:11:30.427423    5491 main.go:141] libmachine: Decoding PEM data...
	I0910 14:11:30.427434    5491 main.go:141] libmachine: Parsing certificate...
	I0910 14:11:30.427959    5491 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:11:30.558359    5491 main.go:141] libmachine: Creating SSH key...
	I0910 14:11:30.636473    5491 main.go:141] libmachine: Creating Disk image...
	I0910 14:11:30.636480    5491 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:11:30.636622    5491 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/embed-certs-701000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/embed-certs-701000/disk.qcow2
	I0910 14:11:30.645307    5491 main.go:141] libmachine: STDOUT: 
	I0910 14:11:30.645322    5491 main.go:141] libmachine: STDERR: 
	I0910 14:11:30.645374    5491 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/embed-certs-701000/disk.qcow2 +20000M
	I0910 14:11:30.652499    5491 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:11:30.652512    5491 main.go:141] libmachine: STDERR: 
	I0910 14:11:30.652524    5491 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/embed-certs-701000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/embed-certs-701000/disk.qcow2
	I0910 14:11:30.652530    5491 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:11:30.652573    5491 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/embed-certs-701000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/embed-certs-701000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/embed-certs-701000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:6a:35:bc:fc:f2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/embed-certs-701000/disk.qcow2
	I0910 14:11:30.654061    5491 main.go:141] libmachine: STDOUT: 
	I0910 14:11:30.654075    5491 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:11:30.654088    5491 client.go:171] LocalClient.Create took 226.978958ms
	I0910 14:11:32.656282    5491 start.go:128] duration metric: createHost completed in 2.283419125s
	I0910 14:11:32.656336    5491 start.go:83] releasing machines lock for "embed-certs-701000", held for 2.284013667s
	W0910 14:11:32.656603    5491 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-701000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-701000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:11:32.671268    5491 out.go:177] 
	W0910 14:11:32.682774    5491 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 14:11:32.682818    5491 out.go:239] * 
	* 
	W0910 14:11:32.684578    5491 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 14:11:32.691224    5491 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-701000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-701000 -n embed-certs-701000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-701000 -n embed-certs-701000: exit status 7 (49.461417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-701000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.97s)
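Editor's note: every failure in this group traces to the same root cause visible in the stderr above: the QEMU networking client cannot reach the socket_vmnet daemon at `/var/run/socket_vmnet` ("Connection refused"). A minimal diagnostic sketch (not part of the test output; the socket path is taken from the failing command line in the log, adjust if your install differs) for checking the daemon socket on the build host:

```shell
#!/bin/sh
# Check whether the socket_vmnet daemon socket referenced by the failing
# qemu command line exists on this host. A "Connection refused" on a unix
# socket usually means the path exists but no daemon is accepting on it,
# or the client lacks permission to connect.
SOCKET=/var/run/socket_vmnet

if [ -S "$SOCKET" ]; then
    echo "socket present: $SOCKET"
else
    echo "socket missing: $SOCKET (start socket_vmnet before running minikube)"
fi
```

On this agent the repeated failures suggest the socket_vmnet service was either not running or not accepting connections for the duration of the run, which would explain why every qemu2-driver test in this report fails identically.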

TestStartStop/group/no-preload/serial/Pause (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-322000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-322000 --alsologtostderr -v=1: exit status 89 (44.409708ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-322000"

-- /stdout --
** stderr ** 
	I0910 14:11:22.822588    5493 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:11:22.822721    5493 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:11:22.822723    5493 out.go:309] Setting ErrFile to fd 2...
	I0910 14:11:22.822725    5493 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:11:22.822840    5493 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:11:22.823076    5493 out.go:303] Setting JSON to false
	I0910 14:11:22.823084    5493 mustload.go:65] Loading cluster: no-preload-322000
	I0910 14:11:22.823280    5493 config.go:182] Loaded profile config "no-preload-322000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:11:22.828037    5493 out.go:177] * The control plane node must be running for this command
	I0910 14:11:22.834983    5493 out.go:177]   To start a cluster, run: "minikube start -p no-preload-322000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-322000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000: exit status 7 (33.426916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-322000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000: exit status 7 (37.980542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-322000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.48s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-546000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-546000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (11.415918417s)

-- stdout --
	* [default-k8s-diff-port-546000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node default-k8s-diff-port-546000 in cluster default-k8s-diff-port-546000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-546000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 14:11:23.574169    5540 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:11:23.574297    5540 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:11:23.574300    5540 out.go:309] Setting ErrFile to fd 2...
	I0910 14:11:23.574302    5540 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:11:23.574406    5540 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:11:23.575397    5540 out.go:303] Setting JSON to false
	I0910 14:11:23.590452    5540 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2458,"bootTime":1694377825,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0910 14:11:23.590548    5540 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0910 14:11:23.599771    5540 out.go:177] * [default-k8s-diff-port-546000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0910 14:11:23.602721    5540 out.go:177]   - MINIKUBE_LOCATION=17207
	I0910 14:11:23.602762    5540 notify.go:220] Checking for updates...
	I0910 14:11:23.606771    5540 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 14:11:23.613788    5540 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 14:11:23.616703    5540 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 14:11:23.619778    5540 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	I0910 14:11:23.622820    5540 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 14:11:23.626386    5540 config.go:182] Loaded profile config "embed-certs-701000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:11:23.626466    5540 config.go:182] Loaded profile config "multinode-362000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:11:23.626522    5540 driver.go:373] Setting default libvirt URI to qemu:///system
	I0910 14:11:23.630686    5540 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 14:11:23.636788    5540 start.go:298] selected driver: qemu2
	I0910 14:11:23.636795    5540 start.go:902] validating driver "qemu2" against <nil>
	I0910 14:11:23.636803    5540 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 14:11:23.638793    5540 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0910 14:11:23.641731    5540 out.go:177] * Automatically selected the socket_vmnet network
	I0910 14:11:23.644883    5540 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 14:11:23.644906    5540 cni.go:84] Creating CNI manager for ""
	I0910 14:11:23.644925    5540 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 14:11:23.644933    5540 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 14:11:23.644942    5540 start_flags.go:321] config:
	{Name:default-k8s-diff-port-546000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-546000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 14:11:23.649103    5540 iso.go:125] acquiring lock: {Name:mk7b64d48bce9081530bc49bdfed6972c82152dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:11:23.657791    5540 out.go:177] * Starting control plane node default-k8s-diff-port-546000 in cluster default-k8s-diff-port-546000
	I0910 14:11:23.661805    5540 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0910 14:11:23.661823    5540 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0910 14:11:23.661839    5540 cache.go:57] Caching tarball of preloaded images
	I0910 14:11:23.661911    5540 preload.go:174] Found /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 14:11:23.661916    5540 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0910 14:11:23.661986    5540 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/default-k8s-diff-port-546000/config.json ...
	I0910 14:11:23.662003    5540 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/default-k8s-diff-port-546000/config.json: {Name:mked5e9ed8c822de18f2feb921bb0fb5be155b1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 14:11:23.662205    5540 start.go:365] acquiring machines lock for default-k8s-diff-port-546000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:11:25.338152    5540 start.go:369] acquired machines lock for "default-k8s-diff-port-546000" in 1.675919834s
	I0910 14:11:25.338338    5540 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-546000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-546000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:11:25.338612    5540 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:11:25.346152    5540 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 14:11:25.390955    5540 start.go:159] libmachine.API.Create for "default-k8s-diff-port-546000" (driver="qemu2")
	I0910 14:11:25.390999    5540 client.go:168] LocalClient.Create starting
	I0910 14:11:25.391141    5540 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:11:25.391203    5540 main.go:141] libmachine: Decoding PEM data...
	I0910 14:11:25.391231    5540 main.go:141] libmachine: Parsing certificate...
	I0910 14:11:25.391321    5540 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:11:25.391363    5540 main.go:141] libmachine: Decoding PEM data...
	I0910 14:11:25.391386    5540 main.go:141] libmachine: Parsing certificate...
	I0910 14:11:25.391989    5540 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:11:25.520893    5540 main.go:141] libmachine: Creating SSH key...
	I0910 14:11:25.583630    5540 main.go:141] libmachine: Creating Disk image...
	I0910 14:11:25.583636    5540 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:11:25.583772    5540 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/default-k8s-diff-port-546000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/default-k8s-diff-port-546000/disk.qcow2
	I0910 14:11:25.592457    5540 main.go:141] libmachine: STDOUT: 
	I0910 14:11:25.592471    5540 main.go:141] libmachine: STDERR: 
	I0910 14:11:25.592539    5540 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/default-k8s-diff-port-546000/disk.qcow2 +20000M
	I0910 14:11:25.599776    5540 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:11:25.599793    5540 main.go:141] libmachine: STDERR: 
	I0910 14:11:25.599806    5540 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/default-k8s-diff-port-546000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/default-k8s-diff-port-546000/disk.qcow2
	I0910 14:11:25.599810    5540 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:11:25.599843    5540 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/default-k8s-diff-port-546000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/default-k8s-diff-port-546000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/default-k8s-diff-port-546000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:71:b0:92:c9:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/default-k8s-diff-port-546000/disk.qcow2
	I0910 14:11:25.601307    5540 main.go:141] libmachine: STDOUT: 
	I0910 14:11:25.601339    5540 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:11:25.601355    5540 client.go:171] LocalClient.Create took 210.345792ms
	I0910 14:11:27.603542    5540 start.go:128] duration metric: createHost completed in 2.264902792s
	I0910 14:11:27.603638    5540 start.go:83] releasing machines lock for "default-k8s-diff-port-546000", held for 2.265458583s
	W0910 14:11:27.603726    5540 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:11:27.611160    5540 out.go:177] * Deleting "default-k8s-diff-port-546000" in qemu2 ...
	W0910 14:11:27.634232    5540 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:11:27.634262    5540 start.go:687] Will try again in 5 seconds ...
	I0910 14:11:32.636501    5540 start.go:365] acquiring machines lock for default-k8s-diff-port-546000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:11:32.656528    5540 start.go:369] acquired machines lock for "default-k8s-diff-port-546000" in 19.833666ms
	I0910 14:11:32.656744    5540 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-546000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-546000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:11:32.657020    5540 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:11:32.666262    5540 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 14:11:32.712589    5540 start.go:159] libmachine.API.Create for "default-k8s-diff-port-546000" (driver="qemu2")
	I0910 14:11:32.712627    5540 client.go:168] LocalClient.Create starting
	I0910 14:11:32.712748    5540 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:11:32.712796    5540 main.go:141] libmachine: Decoding PEM data...
	I0910 14:11:32.712813    5540 main.go:141] libmachine: Parsing certificate...
	I0910 14:11:32.712877    5540 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:11:32.712906    5540 main.go:141] libmachine: Decoding PEM data...
	I0910 14:11:32.712921    5540 main.go:141] libmachine: Parsing certificate...
	I0910 14:11:32.713366    5540 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:11:32.846007    5540 main.go:141] libmachine: Creating SSH key...
	I0910 14:11:32.892168    5540 main.go:141] libmachine: Creating Disk image...
	I0910 14:11:32.892176    5540 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:11:32.892307    5540 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/default-k8s-diff-port-546000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/default-k8s-diff-port-546000/disk.qcow2
	I0910 14:11:32.900882    5540 main.go:141] libmachine: STDOUT: 
	I0910 14:11:32.900900    5540 main.go:141] libmachine: STDERR: 
	I0910 14:11:32.900968    5540 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/default-k8s-diff-port-546000/disk.qcow2 +20000M
	I0910 14:11:32.908916    5540 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:11:32.908942    5540 main.go:141] libmachine: STDERR: 
	I0910 14:11:32.908956    5540 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/default-k8s-diff-port-546000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/default-k8s-diff-port-546000/disk.qcow2
	I0910 14:11:32.908966    5540 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:11:32.909008    5540 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/default-k8s-diff-port-546000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/default-k8s-diff-port-546000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/default-k8s-diff-port-546000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:61:74:d9:ba:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/default-k8s-diff-port-546000/disk.qcow2
	I0910 14:11:32.910616    5540 main.go:141] libmachine: STDOUT: 
	I0910 14:11:32.910632    5540 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:11:32.910645    5540 client.go:171] LocalClient.Create took 198.01325ms
	I0910 14:11:34.912839    5540 start.go:128] duration metric: createHost completed in 2.255792333s
	I0910 14:11:34.912910    5540 start.go:83] releasing machines lock for "default-k8s-diff-port-546000", held for 2.256324792s
	W0910 14:11:34.913314    5540 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-546000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-546000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:11:34.929832    5540 out.go:177] 
	W0910 14:11:34.935935    5540 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 14:11:34.935982    5540 out.go:239] * 
	* 
	W0910 14:11:34.938481    5540 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 14:11:34.947811    5540 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-546000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-546000 -n default-k8s-diff-port-546000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-546000 -n default-k8s-diff-port-546000: exit status 7 (64.704834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-546000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.48s)
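Every start failure in this section traces to the same host-side error visible in the logs above: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet daemon was not listening when the qemu2 driver tried to attach the VM's network. A minimal pre-flight check might look like the sketch below (the socket path is taken from the log; the `SOCKET` variable and the `brew services` command in the comment are illustrative assumptions, not part of the test suite):

```shell
#!/bin/sh
# Sketch: verify the socket_vmnet daemon socket exists before running
# qemu2-driver tests. Path comes from the failing log lines above.
SOCKET="${SOCKET:-/var/run/socket_vmnet}"
if [ -S "$SOCKET" ]; then
  STATUS="present"
else
  STATUS="missing"
  # On a macOS CI host one would typically restart the daemon here, e.g.
  #   sudo brew services restart socket_vmnet
fi
echo "socket_vmnet socket $STATUS at $SOCKET"
```

If the socket is missing, every subsequent `minikube start --driver=qemu2` in this group fails the same way, and the later `kubectl --context ...` errors (`no openapi getter`, `context ... does not exist`) are downstream of the cluster never having started.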

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-701000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-701000 create -f testdata/busybox.yaml: exit status 1 (30.376708ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-701000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-701000 -n embed-certs-701000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-701000 -n embed-certs-701000: exit status 7 (33.346584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-701000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-701000 -n embed-certs-701000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-701000 -n embed-certs-701000: exit status 7 (33.11475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-701000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-701000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-701000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-701000 describe deploy/metrics-server -n kube-system: exit status 1 (26.27525ms)

** stderr ** 
	error: context "embed-certs-701000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-701000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-701000 -n embed-certs-701000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-701000 -n embed-certs-701000: exit status 7 (28.46025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-701000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/SecondStart (6.95s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-701000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-701000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (6.886957792s)

-- stdout --
	* [embed-certs-701000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node embed-certs-701000 in cluster embed-certs-701000
	* Restarting existing qemu2 VM for "embed-certs-701000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-701000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 14:11:33.144089    5578 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:11:33.144213    5578 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:11:33.144215    5578 out.go:309] Setting ErrFile to fd 2...
	I0910 14:11:33.144218    5578 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:11:33.144324    5578 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:11:33.145262    5578 out.go:303] Setting JSON to false
	I0910 14:11:33.160396    5578 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2468,"bootTime":1694377825,"procs":407,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0910 14:11:33.160464    5578 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0910 14:11:33.165708    5578 out.go:177] * [embed-certs-701000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0910 14:11:33.171719    5578 out.go:177]   - MINIKUBE_LOCATION=17207
	I0910 14:11:33.175676    5578 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 14:11:33.171779    5578 notify.go:220] Checking for updates...
	I0910 14:11:33.182604    5578 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 14:11:33.185671    5578 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 14:11:33.188698    5578 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	I0910 14:11:33.191658    5578 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 14:11:33.195037    5578 config.go:182] Loaded profile config "embed-certs-701000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:11:33.195269    5578 driver.go:373] Setting default libvirt URI to qemu:///system
	I0910 14:11:33.199640    5578 out.go:177] * Using the qemu2 driver based on existing profile
	I0910 14:11:33.206695    5578 start.go:298] selected driver: qemu2
	I0910 14:11:33.206704    5578 start.go:902] validating driver "qemu2" against &{Name:embed-certs-701000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-701000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested
:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 14:11:33.206777    5578 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 14:11:33.208803    5578 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 14:11:33.208832    5578 cni.go:84] Creating CNI manager for ""
	I0910 14:11:33.208839    5578 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 14:11:33.208843    5578 start_flags.go:321] config:
	{Name:embed-certs-701000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-701000 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/m
inikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 14:11:33.213256    5578 iso.go:125] acquiring lock: {Name:mk7b64d48bce9081530bc49bdfed6972c82152dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:11:33.220653    5578 out.go:177] * Starting control plane node embed-certs-701000 in cluster embed-certs-701000
	I0910 14:11:33.224637    5578 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0910 14:11:33.224654    5578 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0910 14:11:33.224666    5578 cache.go:57] Caching tarball of preloaded images
	I0910 14:11:33.224718    5578 preload.go:174] Found /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 14:11:33.224724    5578 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0910 14:11:33.224784    5578 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/embed-certs-701000/config.json ...
	I0910 14:11:33.225147    5578 start.go:365] acquiring machines lock for embed-certs-701000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:11:34.913052    5578 start.go:369] acquired machines lock for "embed-certs-701000" in 1.687884375s
	I0910 14:11:34.913200    5578 start.go:96] Skipping create...Using existing machine configuration
	I0910 14:11:34.913237    5578 fix.go:54] fixHost starting: 
	I0910 14:11:34.913900    5578 fix.go:102] recreateIfNeeded on embed-certs-701000: state=Stopped err=<nil>
	W0910 14:11:34.913943    5578 fix.go:128] unexpected machine state, will restart: <nil>
	I0910 14:11:34.925791    5578 out.go:177] * Restarting existing qemu2 VM for "embed-certs-701000" ...
	I0910 14:11:34.933270    5578 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/embed-certs-701000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/embed-certs-701000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/embed-certs-701000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:6a:35:bc:fc:f2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/embed-certs-701000/disk.qcow2
	I0910 14:11:34.942568    5578 main.go:141] libmachine: STDOUT: 
	I0910 14:11:34.942632    5578 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:11:34.942736    5578 fix.go:56] fixHost completed within 29.5085ms
	I0910 14:11:34.942753    5578 start.go:83] releasing machines lock for "embed-certs-701000", held for 29.671083ms
	W0910 14:11:34.942787    5578 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 14:11:34.943003    5578 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:11:34.943020    5578 start.go:687] Will try again in 5 seconds ...
	I0910 14:11:39.945322    5578 start.go:365] acquiring machines lock for embed-certs-701000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:11:39.945842    5578 start.go:369] acquired machines lock for "embed-certs-701000" in 390.875µs
	I0910 14:11:39.945987    5578 start.go:96] Skipping create...Using existing machine configuration
	I0910 14:11:39.946008    5578 fix.go:54] fixHost starting: 
	I0910 14:11:39.946744    5578 fix.go:102] recreateIfNeeded on embed-certs-701000: state=Stopped err=<nil>
	W0910 14:11:39.946771    5578 fix.go:128] unexpected machine state, will restart: <nil>
	I0910 14:11:39.955385    5578 out.go:177] * Restarting existing qemu2 VM for "embed-certs-701000" ...
	I0910 14:11:39.959553    5578 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/embed-certs-701000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/embed-certs-701000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/embed-certs-701000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:6a:35:bc:fc:f2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/embed-certs-701000/disk.qcow2
	I0910 14:11:39.968280    5578 main.go:141] libmachine: STDOUT: 
	I0910 14:11:39.968355    5578 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:11:39.968446    5578 fix.go:56] fixHost completed within 22.438666ms
	I0910 14:11:39.968466    5578 start.go:83] releasing machines lock for "embed-certs-701000", held for 22.597584ms
	W0910 14:11:39.968640    5578 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-701000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-701000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:11:39.976292    5578 out.go:177] 
	W0910 14:11:39.980500    5578 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 14:11:39.980523    5578 out.go:239] * 
	* 
	W0910 14:11:39.983245    5578 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 14:11:39.991491    5578 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-701000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-701000 -n embed-certs-701000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-701000 -n embed-certs-701000: exit status 7 (66.020458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-701000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (6.95s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-546000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-546000 create -f testdata/busybox.yaml: exit status 1 (30.765208ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-546000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-546000 -n default-k8s-diff-port-546000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-546000 -n default-k8s-diff-port-546000: exit status 7 (29.535708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-546000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-546000 -n default-k8s-diff-port-546000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-546000 -n default-k8s-diff-port-546000: exit status 7 (28.620541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-546000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-546000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-546000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-546000 describe deploy/metrics-server -n kube-system: exit status 1 (26.147792ms)

** stderr ** 
	error: context "default-k8s-diff-port-546000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-546000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-546000 -n default-k8s-diff-port-546000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-546000 -n default-k8s-diff-port-546000: exit status 7 (28.546041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-546000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-546000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-546000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (5.17318275s)

-- stdout --
	* [default-k8s-diff-port-546000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node default-k8s-diff-port-546000 in cluster default-k8s-diff-port-546000
	* Restarting existing qemu2 VM for "default-k8s-diff-port-546000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-546000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 14:11:35.410666    5603 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:11:35.410779    5603 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:11:35.410785    5603 out.go:309] Setting ErrFile to fd 2...
	I0910 14:11:35.410787    5603 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:11:35.410898    5603 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:11:35.411925    5603 out.go:303] Setting JSON to false
	I0910 14:11:35.427119    5603 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2470,"bootTime":1694377825,"procs":407,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0910 14:11:35.427188    5603 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0910 14:11:35.430692    5603 out.go:177] * [default-k8s-diff-port-546000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0910 14:11:35.438768    5603 out.go:177]   - MINIKUBE_LOCATION=17207
	I0910 14:11:35.442731    5603 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 14:11:35.438841    5603 notify.go:220] Checking for updates...
	I0910 14:11:35.449762    5603 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 14:11:35.452749    5603 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 14:11:35.455726    5603 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	I0910 14:11:35.458757    5603 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 14:11:35.461943    5603 config.go:182] Loaded profile config "default-k8s-diff-port-546000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:11:35.462195    5603 driver.go:373] Setting default libvirt URI to qemu:///system
	I0910 14:11:35.466741    5603 out.go:177] * Using the qemu2 driver based on existing profile
	I0910 14:11:35.473689    5603 start.go:298] selected driver: qemu2
	I0910 14:11:35.473699    5603 start.go:902] validating driver "qemu2" against &{Name:default-k8s-diff-port-546000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-546000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subne
t: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 14:11:35.473776    5603 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 14:11:35.475723    5603 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0910 14:11:35.475750    5603 cni.go:84] Creating CNI manager for ""
	I0910 14:11:35.475756    5603 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 14:11:35.475761    5603 start_flags.go:321] config:
	{Name:default-k8s-diff-port-546000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-5460
00 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 14:11:35.479740    5603 iso.go:125] acquiring lock: {Name:mk7b64d48bce9081530bc49bdfed6972c82152dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:11:35.483785    5603 out.go:177] * Starting control plane node default-k8s-diff-port-546000 in cluster default-k8s-diff-port-546000
	I0910 14:11:35.490686    5603 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0910 14:11:35.490707    5603 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0910 14:11:35.490717    5603 cache.go:57] Caching tarball of preloaded images
	I0910 14:11:35.490772    5603 preload.go:174] Found /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 14:11:35.490777    5603 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0910 14:11:35.490838    5603 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/default-k8s-diff-port-546000/config.json ...
	I0910 14:11:35.491188    5603 start.go:365] acquiring machines lock for default-k8s-diff-port-546000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:11:35.491215    5603 start.go:369] acquired machines lock for "default-k8s-diff-port-546000" in 19.959µs
	I0910 14:11:35.491224    5603 start.go:96] Skipping create...Using existing machine configuration
	I0910 14:11:35.491229    5603 fix.go:54] fixHost starting: 
	I0910 14:11:35.491340    5603 fix.go:102] recreateIfNeeded on default-k8s-diff-port-546000: state=Stopped err=<nil>
	W0910 14:11:35.491348    5603 fix.go:128] unexpected machine state, will restart: <nil>
	I0910 14:11:35.495711    5603 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-546000" ...
	I0910 14:11:35.503724    5603 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/default-k8s-diff-port-546000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/default-k8s-diff-port-546000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/default-k8s-diff-port-546000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:61:74:d9:ba:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/default-k8s-diff-port-546000/disk.qcow2
	I0910 14:11:35.505635    5603 main.go:141] libmachine: STDOUT: 
	I0910 14:11:35.505652    5603 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:11:35.505682    5603 fix.go:56] fixHost completed within 14.451917ms
	I0910 14:11:35.505691    5603 start.go:83] releasing machines lock for "default-k8s-diff-port-546000", held for 14.472208ms
	W0910 14:11:35.505697    5603 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 14:11:35.505734    5603 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:11:35.505739    5603 start.go:687] Will try again in 5 seconds ...
	I0910 14:11:40.507315    5603 start.go:365] acquiring machines lock for default-k8s-diff-port-546000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:11:40.507408    5603 start.go:369] acquired machines lock for "default-k8s-diff-port-546000" in 66.916µs
	I0910 14:11:40.507430    5603 start.go:96] Skipping create...Using existing machine configuration
	I0910 14:11:40.507433    5603 fix.go:54] fixHost starting: 
	I0910 14:11:40.507628    5603 fix.go:102] recreateIfNeeded on default-k8s-diff-port-546000: state=Stopped err=<nil>
	W0910 14:11:40.507633    5603 fix.go:128] unexpected machine state, will restart: <nil>
	I0910 14:11:40.511815    5603 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-546000" ...
	I0910 14:11:40.519803    5603 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/default-k8s-diff-port-546000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/default-k8s-diff-port-546000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/default-k8s-diff-port-546000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:61:74:d9:ba:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/default-k8s-diff-port-546000/disk.qcow2
	I0910 14:11:40.521556    5603 main.go:141] libmachine: STDOUT: 
	I0910 14:11:40.521569    5603 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:11:40.521591    5603 fix.go:56] fixHost completed within 14.15775ms
	I0910 14:11:40.521596    5603 start.go:83] releasing machines lock for "default-k8s-diff-port-546000", held for 14.183375ms
	W0910 14:11:40.521647    5603 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-546000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-546000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:11:40.529743    5603 out.go:177] 
	W0910 14:11:40.533833    5603 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 14:11:40.533840    5603 out.go:239] * 
	* 
	W0910 14:11:40.534294    5603 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 14:11:40.542786    5603 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-546000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-546000 -n default-k8s-diff-port-546000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-546000 -n default-k8s-diff-port-546000: exit status 7 (31.756083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-546000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.21s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-701000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-701000 -n embed-certs-701000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-701000 -n embed-certs-701000: exit status 7 (32.497333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-701000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.05s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-701000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-701000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-701000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.839542ms)

** stderr ** 
	error: context "embed-certs-701000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-701000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-701000 -n embed-certs-701000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-701000 -n embed-certs-701000: exit status 7 (28.517458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-701000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.05s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p embed-certs-701000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p embed-certs-701000 "sudo crictl images -o json": exit status 89 (38.500125ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-701000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p embed-certs-701000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p embed-certs-701000"
start_stop_delete_test.go:304: v1.28.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.1",
- 	"registry.k8s.io/kube-controller-manager:v1.28.1",
- 	"registry.k8s.io/kube-proxy:v1.28.1",
- 	"registry.k8s.io/kube-scheduler:v1.28.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-701000 -n embed-certs-701000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-701000 -n embed-certs-701000: exit status 7 (28.062375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-701000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-701000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-701000 --alsologtostderr -v=1: exit status 89 (39.417042ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-701000"

-- /stdout --
** stderr ** 
	I0910 14:11:40.253834    5624 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:11:40.253997    5624 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:11:40.254000    5624 out.go:309] Setting ErrFile to fd 2...
	I0910 14:11:40.254003    5624 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:11:40.254108    5624 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:11:40.254317    5624 out.go:303] Setting JSON to false
	I0910 14:11:40.254325    5624 mustload.go:65] Loading cluster: embed-certs-701000
	I0910 14:11:40.254485    5624 config.go:182] Loaded profile config "embed-certs-701000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:11:40.258827    5624 out.go:177] * The control plane node must be running for this command
	I0910 14:11:40.262958    5624 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-701000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-701000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-701000 -n embed-certs-701000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-701000 -n embed-certs-701000: exit status 7 (28.631209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-701000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-701000 -n embed-certs-701000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-701000 -n embed-certs-701000: exit status 7 (29.479291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-701000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-546000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-546000 -n default-k8s-diff-port-546000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-546000 -n default-k8s-diff-port-546000: exit status 7 (29.679875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-546000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-546000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-546000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-546000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.288208ms)

** stderr ** 
	error: context "default-k8s-diff-port-546000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-546000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-546000 -n default-k8s-diff-port-546000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-546000 -n default-k8s-diff-port-546000: exit status 7 (30.023834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-546000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)
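Note: the kubectl errors above occur because the profile's kubeconfig context was removed along with the stopped cluster, so every `kubectl --context default-k8s-diff-port-546000` call fails before reaching the API server. A minimal triage sketch (a hypothetical helper, not part of the test suite) that checks for a context in `kubectl config view -o json` output before issuing context-scoped commands:

```python
import json

def context_exists(config_json: str, name: str) -> bool:
    """Return True if `name` appears as a context in
    `kubectl config view -o json` output."""
    config = json.loads(config_json)
    return any(c.get("name") == name for c in config.get("contexts", []))

# Abridged sample of the JSON shape emitted by `kubectl config view -o json`:
sample = '{"contexts": [{"name": "minikube", "context": {"cluster": "minikube"}}]}'
print(context_exists(sample, "default-k8s-diff-port-546000"))  # False: context gone
```

Guarding with a check like this would turn the opaque `exit status 1` into an explicit "context missing" skip.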

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-546000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-546000 "sudo crictl images -o json": exit status 89 (45.533375ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-546000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-546000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p default-k8s-diff-port-546000"
start_stop_delete_test.go:304: v1.28.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.1",
- 	"registry.k8s.io/kube-controller-manager:v1.28.1",
- 	"registry.k8s.io/kube-proxy:v1.28.1",
- 	"registry.k8s.io/kube-scheduler:v1.28.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-546000 -n default-k8s-diff-port-546000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-546000 -n default-k8s-diff-port-546000: exit status 7 (30.632542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-546000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)
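Note: the `invalid character '*'` decode error above arises because, with the node stopped, minikube prints an advisory message instead of crictl's JSON, and the test feeds that text straight into the JSON decoder. A hedged sketch (a hypothetical helper, not from the suite) that guards the decode and computes the missing-image diff:

```python
import json
from typing import Optional

def missing_images(crictl_output: str, want: list) -> Optional[list]:
    """Parse `crictl images -o json` output and return entries from `want`
    that are absent. Returns None when the output is not JSON, e.g. the
    advisory "* The control plane node must be running ..." message."""
    stripped = crictl_output.lstrip()
    if not stripped.startswith("{"):
        return None  # advisory text, not crictl JSON
    images = json.loads(stripped).get("images", [])
    have = {tag for img in images for tag in img.get("repoTags", [])}
    return [w for w in want if w not in have]

advisory = "* The control plane node must be running for this command"
print(missing_images(advisory, ["registry.k8s.io/pause:3.9"]))  # None: not JSON

sample = '{"images": [{"repoTags": ["registry.k8s.io/pause:3.9"]}]}'
print(missing_images(sample, ["registry.k8s.io/pause:3.9",
                              "registry.k8s.io/etcd:3.5.9-0"]))
# ['registry.k8s.io/etcd:3.5.9-0']
```

With such a guard the test could report "node not running" directly instead of a decoder error followed by an all-images-missing diff.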

TestStartStop/group/newest-cni/serial/FirstStart (9.79s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-578000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-578000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (9.720081167s)

-- stdout --
	* [newest-cni-578000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node newest-cni-578000 in cluster newest-cni-578000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-578000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 14:11:40.763574    5658 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:11:40.763700    5658 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:11:40.763703    5658 out.go:309] Setting ErrFile to fd 2...
	I0910 14:11:40.763706    5658 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:11:40.763824    5658 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:11:40.764839    5658 out.go:303] Setting JSON to false
	I0910 14:11:40.781933    5658 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2475,"bootTime":1694377825,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0910 14:11:40.781998    5658 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0910 14:11:40.786839    5658 out.go:177] * [newest-cni-578000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0910 14:11:40.796861    5658 out.go:177]   - MINIKUBE_LOCATION=17207
	I0910 14:11:40.793877    5658 notify.go:220] Checking for updates...
	I0910 14:11:40.814852    5658 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 14:11:40.822831    5658 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 14:11:40.829804    5658 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 14:11:40.837757    5658 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	I0910 14:11:40.840931    5658 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 14:11:40.844132    5658 config.go:182] Loaded profile config "default-k8s-diff-port-546000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:11:40.844198    5658 config.go:182] Loaded profile config "multinode-362000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:11:40.844238    5658 driver.go:373] Setting default libvirt URI to qemu:///system
	I0910 14:11:40.847731    5658 out.go:177] * Using the qemu2 driver based on user configuration
	I0910 14:11:40.856835    5658 start.go:298] selected driver: qemu2
	I0910 14:11:40.856841    5658 start.go:902] validating driver "qemu2" against <nil>
	I0910 14:11:40.856852    5658 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 14:11:40.858952    5658 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	W0910 14:11:40.858982    5658 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0910 14:11:40.864764    5658 out.go:177] * Automatically selected the socket_vmnet network
	I0910 14:11:40.867868    5658 start_flags.go:941] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0910 14:11:40.867890    5658 cni.go:84] Creating CNI manager for ""
	I0910 14:11:40.867897    5658 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 14:11:40.867901    5658 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0910 14:11:40.867907    5658 start_flags.go:321] config:
	{Name:newest-cni-578000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-578000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:d
ocker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/s
ocket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 14:11:40.872054    5658 iso.go:125] acquiring lock: {Name:mk7b64d48bce9081530bc49bdfed6972c82152dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:11:40.879779    5658 out.go:177] * Starting control plane node newest-cni-578000 in cluster newest-cni-578000
	I0910 14:11:40.882748    5658 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0910 14:11:40.882769    5658 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0910 14:11:40.882779    5658 cache.go:57] Caching tarball of preloaded images
	I0910 14:11:40.882844    5658 preload.go:174] Found /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 14:11:40.882849    5658 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0910 14:11:40.882914    5658 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/newest-cni-578000/config.json ...
	I0910 14:11:40.882924    5658 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/newest-cni-578000/config.json: {Name:mk8d4ca4d856f388f787a54f0981f70ad0b0a555 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 14:11:40.883110    5658 start.go:365] acquiring machines lock for newest-cni-578000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:11:40.883135    5658 start.go:369] acquired machines lock for "newest-cni-578000" in 19.584µs
	I0910 14:11:40.883145    5658 start.go:93] Provisioning new machine with config: &{Name:newest-cni-578000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-578000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountS
tring:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:11:40.883177    5658 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:11:40.886865    5658 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 14:11:40.900849    5658 start.go:159] libmachine.API.Create for "newest-cni-578000" (driver="qemu2")
	I0910 14:11:40.900875    5658 client.go:168] LocalClient.Create starting
	I0910 14:11:40.900937    5658 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:11:40.900962    5658 main.go:141] libmachine: Decoding PEM data...
	I0910 14:11:40.900972    5658 main.go:141] libmachine: Parsing certificate...
	I0910 14:11:40.901012    5658 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:11:40.901030    5658 main.go:141] libmachine: Decoding PEM data...
	I0910 14:11:40.901038    5658 main.go:141] libmachine: Parsing certificate...
	I0910 14:11:40.901335    5658 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:11:41.063741    5658 main.go:141] libmachine: Creating SSH key...
	I0910 14:11:41.115862    5658 main.go:141] libmachine: Creating Disk image...
	I0910 14:11:41.115873    5658 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:11:41.116058    5658 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/newest-cni-578000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/newest-cni-578000/disk.qcow2
	I0910 14:11:41.125126    5658 main.go:141] libmachine: STDOUT: 
	I0910 14:11:41.125145    5658 main.go:141] libmachine: STDERR: 
	I0910 14:11:41.125227    5658 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/newest-cni-578000/disk.qcow2 +20000M
	I0910 14:11:41.132859    5658 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:11:41.132874    5658 main.go:141] libmachine: STDERR: 
	I0910 14:11:41.132892    5658 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/newest-cni-578000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/newest-cni-578000/disk.qcow2
	I0910 14:11:41.132907    5658 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:11:41.132949    5658 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/newest-cni-578000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/newest-cni-578000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/newest-cni-578000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:95:ae:2f:44:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/newest-cni-578000/disk.qcow2
	I0910 14:11:41.134507    5658 main.go:141] libmachine: STDOUT: 
	I0910 14:11:41.134521    5658 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:11:41.134541    5658 client.go:171] LocalClient.Create took 233.660958ms
	I0910 14:11:43.136782    5658 start.go:128] duration metric: createHost completed in 2.253579666s
	I0910 14:11:43.136848    5658 start.go:83] releasing machines lock for "newest-cni-578000", held for 2.253708958s
	W0910 14:11:43.136898    5658 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:11:43.148063    5658 out.go:177] * Deleting "newest-cni-578000" in qemu2 ...
	W0910 14:11:43.168265    5658 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:11:43.168298    5658 start.go:687] Will try again in 5 seconds ...
	I0910 14:11:48.170574    5658 start.go:365] acquiring machines lock for newest-cni-578000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:11:48.171042    5658 start.go:369] acquired machines lock for "newest-cni-578000" in 360.333µs
	I0910 14:11:48.171204    5658 start.go:93] Provisioning new machine with config: &{Name:newest-cni-578000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-578000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountS
tring:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0910 14:11:48.171480    5658 start.go:125] createHost starting for "" (driver="qemu2")
	I0910 14:11:48.182323    5658 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0910 14:11:48.229118    5658 start.go:159] libmachine.API.Create for "newest-cni-578000" (driver="qemu2")
	I0910 14:11:48.229156    5658 client.go:168] LocalClient.Create starting
	I0910 14:11:48.229259    5658 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/ca.pem
	I0910 14:11:48.229312    5658 main.go:141] libmachine: Decoding PEM data...
	I0910 14:11:48.229329    5658 main.go:141] libmachine: Parsing certificate...
	I0910 14:11:48.229398    5658 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17207-1093/.minikube/certs/cert.pem
	I0910 14:11:48.229442    5658 main.go:141] libmachine: Decoding PEM data...
	I0910 14:11:48.229462    5658 main.go:141] libmachine: Parsing certificate...
	I0910 14:11:48.230013    5658 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso...
	I0910 14:11:48.356724    5658 main.go:141] libmachine: Creating SSH key...
	I0910 14:11:48.398058    5658 main.go:141] libmachine: Creating Disk image...
	I0910 14:11:48.398063    5658 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0910 14:11:48.398200    5658 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/newest-cni-578000/disk.qcow2.raw /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/newest-cni-578000/disk.qcow2
	I0910 14:11:48.406833    5658 main.go:141] libmachine: STDOUT: 
	I0910 14:11:48.406846    5658 main.go:141] libmachine: STDERR: 
	I0910 14:11:48.406897    5658 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/newest-cni-578000/disk.qcow2 +20000M
	I0910 14:11:48.414096    5658 main.go:141] libmachine: STDOUT: Image resized.
	
	I0910 14:11:48.414108    5658 main.go:141] libmachine: STDERR: 
	I0910 14:11:48.414121    5658 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/newest-cni-578000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/newest-cni-578000/disk.qcow2
	I0910 14:11:48.414129    5658 main.go:141] libmachine: Starting QEMU VM...
	I0910 14:11:48.414172    5658 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/newest-cni-578000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/newest-cni-578000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/newest-cni-578000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:75:31:30:af:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/newest-cni-578000/disk.qcow2
	I0910 14:11:48.415677    5658 main.go:141] libmachine: STDOUT: 
	I0910 14:11:48.415690    5658 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:11:48.415705    5658 client.go:171] LocalClient.Create took 186.544625ms
	I0910 14:11:50.417889    5658 start.go:128] duration metric: createHost completed in 2.246393125s
	I0910 14:11:50.417935    5658 start.go:83] releasing machines lock for "newest-cni-578000", held for 2.246876125s
	W0910 14:11:50.418206    5658 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-578000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-578000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:11:50.428640    5658 out.go:177] 
	W0910 14:11:50.432743    5658 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 14:11:50.432755    5658 out.go:239] * 
	* 
	W0910 14:11:50.434180    5658 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 14:11:50.443783    5658 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-578000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-578000 -n newest-cni-578000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-578000 -n newest-cni-578000: exit status 7 (71.894542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-578000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.79s)
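Note: every qemu2 start in this run fails the same way: the host's `socket_vmnet` daemon is not accepting connections at `/var/run/socket_vmnet`, so VM creation aborts before Kubernetes is involved. A small triage sketch (hypothetical, assuming only the error strings visible in the logs above) that classifies this failure from minikube's stderr:

```python
import re

# Matches the error seen repeatedly in this run's qemu2 start attempts.
SOCKET_VMNET_ERR = re.compile(
    r'Failed to connect to "(?P<sock>[^"]+)": Connection refused'
)

def classify_start_failure(stderr: str) -> str:
    """Map a known minikube start failure signature to a short diagnosis."""
    m = SOCKET_VMNET_ERR.search(stderr)
    if m:
        return (f"socket_vmnet daemon not listening on {m.group('sock')}; "
                "start it on the host before retrying")
    return "unrecognized failure; inspect `minikube logs`"

line = 'ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused'
print(classify_start_failure(line))
```

Since the same signature appears in both creation attempts (and in the other qemu2 failures in this report), a classifier like this would let the harness collapse dozens of identical failures into one host-environment diagnosis.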

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-546000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-546000 --alsologtostderr -v=1: exit status 89 (68.984ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-546000"

-- /stdout --
** stderr ** 
	I0910 14:11:40.782464    5660 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:11:40.787092    5660 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:11:40.787095    5660 out.go:309] Setting ErrFile to fd 2...
	I0910 14:11:40.787098    5660 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:11:40.787234    5660 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:11:40.791088    5660 out.go:303] Setting JSON to false
	I0910 14:11:40.791109    5660 mustload.go:65] Loading cluster: default-k8s-diff-port-546000
	I0910 14:11:40.791289    5660 config.go:182] Loaded profile config "default-k8s-diff-port-546000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:11:40.799756    5660 out.go:177] * The control plane node must be running for this command
	I0910 14:11:40.814855    5660 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-546000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-546000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-546000 -n default-k8s-diff-port-546000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-546000 -n default-k8s-diff-port-546000: exit status 7 (33.224958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-546000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-546000 -n default-k8s-diff-port-546000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-546000 -n default-k8s-diff-port-546000: exit status 7 (35.64525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-546000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.14s)

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-578000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-578000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (5.176286375s)

-- stdout --
	* [newest-cni-578000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node newest-cni-578000 in cluster newest-cni-578000
	* Restarting existing qemu2 VM for "newest-cni-578000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-578000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0910 14:11:50.777506    5708 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:11:50.777618    5708 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:11:50.777620    5708 out.go:309] Setting ErrFile to fd 2...
	I0910 14:11:50.777623    5708 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:11:50.777729    5708 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:11:50.778660    5708 out.go:303] Setting JSON to false
	I0910 14:11:50.793950    5708 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2485,"bootTime":1694377825,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0910 14:11:50.794018    5708 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0910 14:11:50.799007    5708 out.go:177] * [newest-cni-578000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0910 14:11:50.805968    5708 out.go:177]   - MINIKUBE_LOCATION=17207
	I0910 14:11:50.806008    5708 notify.go:220] Checking for updates...
	I0910 14:11:50.809923    5708 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 14:11:50.813909    5708 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 14:11:50.816951    5708 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 14:11:50.819923    5708 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	I0910 14:11:50.822877    5708 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 14:11:50.826190    5708 config.go:182] Loaded profile config "newest-cni-578000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:11:50.826421    5708 driver.go:373] Setting default libvirt URI to qemu:///system
	I0910 14:11:50.830879    5708 out.go:177] * Using the qemu2 driver based on existing profile
	I0910 14:11:50.837911    5708 start.go:298] selected driver: qemu2
	I0910 14:11:50.837918    5708 start.go:902] validating driver "qemu2" against &{Name:newest-cni-578000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-578000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 14:11:50.837981    5708 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 14:11:50.840160    5708 start_flags.go:941] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0910 14:11:50.840190    5708 cni.go:84] Creating CNI manager for ""
	I0910 14:11:50.840199    5708 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 14:11:50.840206    5708 start_flags.go:321] config:
	{Name:newest-cni-578000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-578000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 14:11:50.845011    5708 iso.go:125] acquiring lock: {Name:mk7b64d48bce9081530bc49bdfed6972c82152dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 14:11:50.851931    5708 out.go:177] * Starting control plane node newest-cni-578000 in cluster newest-cni-578000
	I0910 14:11:50.855913    5708 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0910 14:11:50.855932    5708 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0910 14:11:50.855950    5708 cache.go:57] Caching tarball of preloaded images
	I0910 14:11:50.856006    5708 preload.go:174] Found /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0910 14:11:50.856011    5708 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0910 14:11:50.856089    5708 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/newest-cni-578000/config.json ...
	I0910 14:11:50.856449    5708 start.go:365] acquiring machines lock for newest-cni-578000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:11:50.856474    5708 start.go:369] acquired machines lock for "newest-cni-578000" in 19.542µs
	I0910 14:11:50.856483    5708 start.go:96] Skipping create...Using existing machine configuration
	I0910 14:11:50.856488    5708 fix.go:54] fixHost starting: 
	I0910 14:11:50.856606    5708 fix.go:102] recreateIfNeeded on newest-cni-578000: state=Stopped err=<nil>
	W0910 14:11:50.856614    5708 fix.go:128] unexpected machine state, will restart: <nil>
	I0910 14:11:50.860912    5708 out.go:177] * Restarting existing qemu2 VM for "newest-cni-578000" ...
	I0910 14:11:50.868926    5708 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/newest-cni-578000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/newest-cni-578000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/newest-cni-578000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:75:31:30:af:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/newest-cni-578000/disk.qcow2
	I0910 14:11:50.870716    5708 main.go:141] libmachine: STDOUT: 
	I0910 14:11:50.870732    5708 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:11:50.870760    5708 fix.go:56] fixHost completed within 14.272583ms
	I0910 14:11:50.870765    5708 start.go:83] releasing machines lock for "newest-cni-578000", held for 14.287583ms
	W0910 14:11:50.870771    5708 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 14:11:50.870804    5708 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:11:50.870808    5708 start.go:687] Will try again in 5 seconds ...
	I0910 14:11:55.872919    5708 start.go:365] acquiring machines lock for newest-cni-578000: {Name:mk2d968ba86850e9efa8d8fd889bb6ef8c5f65ce Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0910 14:11:55.873244    5708 start.go:369] acquired machines lock for "newest-cni-578000" in 257.625µs
	I0910 14:11:55.873375    5708 start.go:96] Skipping create...Using existing machine configuration
	I0910 14:11:55.873403    5708 fix.go:54] fixHost starting: 
	I0910 14:11:55.874107    5708 fix.go:102] recreateIfNeeded on newest-cni-578000: state=Stopped err=<nil>
	W0910 14:11:55.874131    5708 fix.go:128] unexpected machine state, will restart: <nil>
	I0910 14:11:55.878586    5708 out.go:177] * Restarting existing qemu2 VM for "newest-cni-578000" ...
	I0910 14:11:55.886759    5708 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/newest-cni-578000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/newest-cni-578000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/newest-cni-578000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:75:31:30:af:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17207-1093/.minikube/machines/newest-cni-578000/disk.qcow2
	I0910 14:11:55.894888    5708 main.go:141] libmachine: STDOUT: 
	I0910 14:11:55.894941    5708 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0910 14:11:55.895019    5708 fix.go:56] fixHost completed within 21.61775ms
	I0910 14:11:55.895037    5708 start.go:83] releasing machines lock for "newest-cni-578000", held for 21.775333ms
	W0910 14:11:55.895179    5708 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-578000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-578000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0910 14:11:55.901504    5708 out.go:177] 
	W0910 14:11:55.904698    5708 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0910 14:11:55.904732    5708 out.go:239] * 
	* 
	W0910 14:11:55.907160    5708 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 14:11:55.914549    5708 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-578000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-578000 -n newest-cni-578000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-578000 -n newest-cni-578000: exit status 7 (68.775583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-578000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
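Both restart attempts in the run above fail on the same qemu2 driver error: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the vmnet relay the driver dials is not accepting connections on the agent. A minimal sketch for checking the socket on the host (the path is taken from the log above; adjust it if your socket_vmnet service is configured differently):

```shell
#!/bin/sh
# Probe the socket_vmnet endpoint that the minikube qemu2 driver connects to.
# SOCK is the path reported in the failure log; it is an assumption that the
# same path applies on your machine.
SOCK=/var/run/socket_vmnet
if [ -S "$SOCK" ]; then
  echo "present: $SOCK"
else
  echo "missing: $SOCK (socket_vmnet is probably not running)"
fi
```

If the socket file is absent, every `Restarting existing qemu2 VM` attempt will keep failing with the same "Connection refused" until the socket_vmnet service is brought back up, which matches the repeated retry output in this log.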

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p newest-cni-578000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p newest-cni-578000 "sudo crictl images -o json": exit status 89 (43.313708ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-578000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p newest-cni-578000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p newest-cni-578000"
start_stop_delete_test.go:304: v1.28.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.1",
- 	"registry.k8s.io/kube-controller-manager:v1.28.1",
- 	"registry.k8s.io/kube-proxy:v1.28.1",
- 	"registry.k8s.io/kube-scheduler:v1.28.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-578000 -n newest-cni-578000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-578000 -n newest-cni-578000: exit status 7 (29.322542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-578000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-578000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-578000 --alsologtostderr -v=1: exit status 89 (42.235667ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-578000"

-- /stdout --
** stderr ** 
	I0910 14:11:56.097966    5722 out.go:296] Setting OutFile to fd 1 ...
	I0910 14:11:56.098095    5722 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:11:56.098098    5722 out.go:309] Setting ErrFile to fd 2...
	I0910 14:11:56.098100    5722 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 14:11:56.098214    5722 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 14:11:56.098419    5722 out.go:303] Setting JSON to false
	I0910 14:11:56.098428    5722 mustload.go:65] Loading cluster: newest-cni-578000
	I0910 14:11:56.098593    5722 config.go:182] Loaded profile config "newest-cni-578000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 14:11:56.103455    5722 out.go:177] * The control plane node must be running for this command
	I0910 14:11:56.107557    5722 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-578000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-578000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-578000 -n newest-cni-578000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-578000 -n newest-cni-578000: exit status 7 (29.386459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-578000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-578000 -n newest-cni-578000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-578000 -n newest-cni-578000: exit status 7 (29.745875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-578000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)


Test pass (136/244)

Order passed test Duration
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.11
10 TestDownloadOnly/v1.28.1/json-events 16.36
11 TestDownloadOnly/v1.28.1/preload-exists 0
14 TestDownloadOnly/v1.28.1/kubectl 0
15 TestDownloadOnly/v1.28.1/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.23
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.23
19 TestBinaryMirror 0.34
30 TestHyperKitDriverInstallOrUpdate 8.06
33 TestErrorSpam/setup 28.53
34 TestErrorSpam/start 0.34
35 TestErrorSpam/status 0.25
36 TestErrorSpam/pause 0.63
37 TestErrorSpam/unpause 0.61
38 TestErrorSpam/stop 3.23
41 TestFunctional/serial/CopySyncFile 0
42 TestFunctional/serial/StartWithProxy 46.25
43 TestFunctional/serial/AuditLog 0
44 TestFunctional/serial/SoftStart 35.53
45 TestFunctional/serial/KubeContext 0.03
46 TestFunctional/serial/KubectlGetPods 0.06
49 TestFunctional/serial/CacheCmd/cache/add_remote 3.62
50 TestFunctional/serial/CacheCmd/cache/add_local 1.24
51 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.03
52 TestFunctional/serial/CacheCmd/cache/list 0.03
53 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
54 TestFunctional/serial/CacheCmd/cache/cache_reload 0.96
55 TestFunctional/serial/CacheCmd/cache/delete 0.07
56 TestFunctional/serial/MinikubeKubectlCmd 0.41
57 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.54
58 TestFunctional/serial/ExtraConfig 35.5
59 TestFunctional/serial/ComponentHealth 0.04
60 TestFunctional/serial/LogsCmd 0.65
61 TestFunctional/serial/LogsFileCmd 0.65
62 TestFunctional/serial/InvalidService 4.36
64 TestFunctional/parallel/ConfigCmd 0.21
65 TestFunctional/parallel/DashboardCmd 8.86
66 TestFunctional/parallel/DryRun 0.22
67 TestFunctional/parallel/InternationalLanguage 0.1
68 TestFunctional/parallel/StatusCmd 0.25
73 TestFunctional/parallel/AddonsCmd 0.12
74 TestFunctional/parallel/PersistentVolumeClaim 25.5
76 TestFunctional/parallel/SSHCmd 0.13
77 TestFunctional/parallel/CpCmd 0.28
79 TestFunctional/parallel/FileSync 0.07
80 TestFunctional/parallel/CertSync 0.41
84 TestFunctional/parallel/NodeLabels 0.05
86 TestFunctional/parallel/NonActiveRuntimeDisabled 0.12
88 TestFunctional/parallel/License 0.2
89 TestFunctional/parallel/Version/short 0.04
90 TestFunctional/parallel/Version/components 0.17
91 TestFunctional/parallel/ImageCommands/ImageListShort 0.08
92 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
93 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
94 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
95 TestFunctional/parallel/ImageCommands/ImageBuild 1.84
96 TestFunctional/parallel/ImageCommands/Setup 1.53
97 TestFunctional/parallel/DockerEnv/bash 0.38
98 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
99 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.07
100 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.08
101 TestFunctional/parallel/ServiceCmd/DeployApp 12.12
102 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.24
103 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.64
104 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.47
105 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.52
106 TestFunctional/parallel/ImageCommands/ImageRemove 0.16
107 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.63
108 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.63
111 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
113 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.12
114 TestFunctional/parallel/ServiceCmd/List 0.09
115 TestFunctional/parallel/ServiceCmd/JSONOutput 0.09
116 TestFunctional/parallel/ServiceCmd/HTTPS 0.1
117 TestFunctional/parallel/ServiceCmd/Format 0.11
118 TestFunctional/parallel/ServiceCmd/URL 0.1
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
122 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
123 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.18
126 TestFunctional/parallel/ProfileCmd/profile_list 0.15
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.15
128 TestFunctional/parallel/MountCmd/any-port 5.18
129 TestFunctional/parallel/MountCmd/specific-port 0.83
131 TestFunctional/delete_addon-resizer_images 0.12
132 TestFunctional/delete_my-image_image 0.04
133 TestFunctional/delete_minikube_cached_images 0.04
137 TestImageBuild/serial/Setup 29.74
138 TestImageBuild/serial/NormalBuild 1.09
140 TestImageBuild/serial/BuildWithDockerIgnore 0.13
141 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.11
144 TestIngressAddonLegacy/StartLegacyK8sCluster 75.59
146 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 13.38
147 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.26
151 TestJSONOutput/start/Command 43.06
152 TestJSONOutput/start/Audit 0
154 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
155 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
157 TestJSONOutput/pause/Command 0.3
158 TestJSONOutput/pause/Audit 0
160 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
163 TestJSONOutput/unpause/Command 0.22
164 TestJSONOutput/unpause/Audit 0
166 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/stop/Command 12.08
170 TestJSONOutput/stop/Audit 0
172 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
174 TestErrorJSONOutput 0.32
179 TestMainNoArgs 0.03
180 TestMinikubeProfile 64.23
236 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
240 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
241 TestNoKubernetes/serial/ProfileList 0.15
242 TestNoKubernetes/serial/Stop 0.06
244 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
262 TestStartStop/group/old-k8s-version/serial/Stop 0.06
263 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.09
267 TestStartStop/group/no-preload/serial/Stop 0.06
268 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.09
284 TestStartStop/group/embed-certs/serial/Stop 0.06
285 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.09
289 TestStartStop/group/default-k8s-diff-port/serial/Stop 0.06
290 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.09
302 TestStartStop/group/newest-cni/serial/DeployApp 0
303 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
304 TestStartStop/group/newest-cni/serial/Stop 0.06
305 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.09
307 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
308 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.16.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.11s)
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-556000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-556000: exit status 85 (104.746042ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-556000 | jenkins | v1.31.2 | 10 Sep 23 13:52 PDT |          |
	|         | -p download-only-556000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/10 13:52:07
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 13:52:07.346540    2202 out.go:296] Setting OutFile to fd 1 ...
	I0910 13:52:07.346669    2202 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 13:52:07.346672    2202 out.go:309] Setting ErrFile to fd 2...
	I0910 13:52:07.346675    2202 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 13:52:07.346787    2202 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	W0910 13:52:07.346856    2202 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17207-1093/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17207-1093/.minikube/config/config.json: no such file or directory
	I0910 13:52:07.347963    2202 out.go:303] Setting JSON to true
	I0910 13:52:07.364181    2202 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1302,"bootTime":1694377825,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0910 13:52:07.364247    2202 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0910 13:52:07.370979    2202 out.go:97] [download-only-556000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0910 13:52:07.373814    2202 out.go:169] MINIKUBE_LOCATION=17207
	W0910 13:52:07.371131    2202 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball: no such file or directory
	I0910 13:52:07.371151    2202 notify.go:220] Checking for updates...
	I0910 13:52:07.380713    2202 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 13:52:07.383862    2202 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 13:52:07.386917    2202 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 13:52:07.389906    2202 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	W0910 13:52:07.395844    2202 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0910 13:52:07.396017    2202 driver.go:373] Setting default libvirt URI to qemu:///system
	I0910 13:52:07.401946    2202 out.go:97] Using the qemu2 driver based on user configuration
	I0910 13:52:07.401952    2202 start.go:298] selected driver: qemu2
	I0910 13:52:07.401954    2202 start.go:902] validating driver "qemu2" against <nil>
	I0910 13:52:07.402006    2202 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0910 13:52:07.405829    2202 out.go:169] Automatically selected the socket_vmnet network
	I0910 13:52:07.412223    2202 start_flags.go:384] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0910 13:52:07.412303    2202 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0910 13:52:07.412368    2202 cni.go:84] Creating CNI manager for ""
	I0910 13:52:07.412384    2202 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0910 13:52:07.412392    2202 start_flags.go:321] config:
	{Name:download-only-556000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-556000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRun
time:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 13:52:07.417886    2202 iso.go:125] acquiring lock: {Name:mk7b64d48bce9081530bc49bdfed6972c82152dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 13:52:07.420959    2202 out.go:97] Downloading VM boot image ...
	I0910 13:52:07.420976    2202 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/iso/arm64/minikube-v1.31.0-1694081706-17207-arm64.iso
	I0910 13:52:16.283106    2202 out.go:97] Starting control plane node download-only-556000 in cluster download-only-556000
	I0910 13:52:16.283131    2202 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0910 13:52:16.346946    2202 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0910 13:52:16.347033    2202 cache.go:57] Caching tarball of preloaded images
	I0910 13:52:16.347188    2202 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0910 13:52:16.352220    2202 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0910 13:52:16.352227    2202 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0910 13:52:16.431413    2202 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0910 13:52:24.779793    2202 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0910 13:52:24.779912    2202 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0910 13:52:25.419300    2202 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0910 13:52:25.419487    2202 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/download-only-556000/config.json ...
	I0910 13:52:25.419505    2202 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/download-only-556000/config.json: {Name:mk4987b801c215dfc32a717379894ca3551848b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0910 13:52:25.419724    2202 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0910 13:52:25.419891    2202 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0910 13:52:25.752870    2202 out.go:169] 
	W0910 13:52:25.756815    2202 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17207-1093/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106165f68 0x106165f68 0x106165f68 0x106165f68 0x106165f68 0x106165f68 0x106165f68] Decompressors:map[bz2:0x140004c1a78 gz:0x140004c1ad0 tar:0x140004c1a80 tar.bz2:0x140004c1a90 tar.gz:0x140004c1aa0 tar.xz:0x140004c1ab0 tar.zst:0x140004c1ac0 tbz2:0x140004c1a90 tgz:0x140004c1aa0 txz:0x140004c1ab0 tzst:0x140004c1ac0 xz:0x140004c1ad8 zip:0x140004c1ae0 zst:0x140004c1af0] Getters:map[file:0x14000be6ff0 http:0x14000778140 https:0x14000778190] Dir:false ProgressListener
:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0910 13:52:25.756843    2202 out_reason.go:110] 
	W0910 13:52:25.763891    2202 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0910 13:52:25.767786    2202 out.go:169] 
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-556000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.11s)

                                                
                                    
TestDownloadOnly/v1.28.1/json-events (16.36s)
=== RUN   TestDownloadOnly/v1.28.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-556000 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-556000 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=docker --driver=qemu2 : (16.355061708s)
--- PASS: TestDownloadOnly/v1.28.1/json-events (16.36s)

                                                
                                    
TestDownloadOnly/v1.28.1/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.28.1/preload-exists
--- PASS: TestDownloadOnly/v1.28.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.1/kubectl
--- PASS: TestDownloadOnly/v1.28.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.1/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.28.1/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-556000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-556000: exit status 85 (76.268584ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-556000 | jenkins | v1.31.2 | 10 Sep 23 13:52 PDT |          |
	|         | -p download-only-556000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-556000 | jenkins | v1.31.2 | 10 Sep 23 13:52 PDT |          |
	|         | -p download-only-556000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.1   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/10 13:52:25
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0910 13:52:25.965316    2212 out.go:296] Setting OutFile to fd 1 ...
	I0910 13:52:25.965436    2212 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 13:52:25.965439    2212 out.go:309] Setting ErrFile to fd 2...
	I0910 13:52:25.965441    2212 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 13:52:25.965552    2212 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	W0910 13:52:25.965610    2212 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17207-1093/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17207-1093/.minikube/config/config.json: no such file or directory
	I0910 13:52:25.966524    2212 out.go:303] Setting JSON to true
	I0910 13:52:25.981594    2212 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1320,"bootTime":1694377825,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0910 13:52:25.981659    2212 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0910 13:52:25.987134    2212 out.go:97] [download-only-556000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0910 13:52:25.990039    2212 out.go:169] MINIKUBE_LOCATION=17207
	I0910 13:52:25.987217    2212 notify.go:220] Checking for updates...
	I0910 13:52:25.998102    2212 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 13:52:26.001128    2212 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 13:52:26.004075    2212 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 13:52:26.007120    2212 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	W0910 13:52:26.013053    2212 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0910 13:52:26.013299    2212 config.go:182] Loaded profile config "download-only-556000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0910 13:52:26.013329    2212 start.go:810] api.Load failed for download-only-556000: filestore "download-only-556000": Docker machine "download-only-556000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0910 13:52:26.013375    2212 driver.go:373] Setting default libvirt URI to qemu:///system
	W0910 13:52:26.013388    2212 start.go:810] api.Load failed for download-only-556000: filestore "download-only-556000": Docker machine "download-only-556000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0910 13:52:26.017044    2212 out.go:97] Using the qemu2 driver based on existing profile
	I0910 13:52:26.017053    2212 start.go:298] selected driver: qemu2
	I0910 13:52:26.017055    2212 start.go:902] validating driver "qemu2" against &{Name:download-only-556000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-556000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 13:52:26.020201    2212 cni.go:84] Creating CNI manager for ""
	I0910 13:52:26.020221    2212 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0910 13:52:26.020228    2212 start_flags.go:321] config:
	{Name:download-only-556000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:download-only-556000 Namespace:def
ault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath
: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 13:52:26.023983    2212 iso.go:125] acquiring lock: {Name:mk7b64d48bce9081530bc49bdfed6972c82152dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0910 13:52:26.027968    2212 out.go:97] Starting control plane node download-only-556000 in cluster download-only-556000
	I0910 13:52:26.027973    2212 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0910 13:52:26.080049    2212 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.1/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0910 13:52:26.080072    2212 cache.go:57] Caching tarball of preloaded images
	I0910 13:52:26.080203    2212 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0910 13:52:26.086629    2212 out.go:97] Downloading Kubernetes v1.28.1 preload ...
	I0910 13:52:26.086636    2212 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 ...
	I0910 13:52:26.165845    2212 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.1/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4?checksum=md5:014fa2c9750ed18a91c50dffb6ed7aeb -> /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0910 13:52:35.964993    2212 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 ...
	I0910 13:52:35.965129    2212 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 ...
	I0910 13:52:36.544206    2212 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0910 13:52:36.544274    2212 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/download-only-556000/config.json ...
	I0910 13:52:36.544558    2212 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0910 13:52:36.544702    2212 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17207-1093/.minikube/cache/darwin/arm64/v1.28.1/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-556000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.1/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/DeleteAll (0.23s)
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-556000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-823000 --alsologtostderr --binary-mirror http://127.0.0.1:49314 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-823000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-823000
--- PASS: TestBinaryMirror (0.34s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (8.06s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-885000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-885000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-885000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-885000 --driver=qemu2 : (28.529791625s)
--- PASS: TestErrorSpam/setup (28.53s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-885000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-885000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-885000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-885000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-885000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-885000 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-885000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-885000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-885000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-885000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-885000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-885000 status
--- PASS: TestErrorSpam/status (0.25s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-885000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-885000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-885000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-885000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-885000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-885000 pause
--- PASS: TestErrorSpam/pause (0.63s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-885000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-885000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-885000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-885000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-885000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-885000 unpause
--- PASS: TestErrorSpam/unpause (0.61s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-885000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-885000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-885000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-885000 stop: (3.070073584s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-885000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-885000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-885000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-885000 stop
--- PASS: TestErrorSpam/stop (3.23s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/17207-1093/.minikube/files/etc/test/nested/copy/2200/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-765000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-765000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (46.251625833s)
--- PASS: TestFunctional/serial/StartWithProxy (46.25s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-765000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-765000 --alsologtostderr -v=8: (35.527922333s)
functional_test.go:659: soft start took 35.528328s for "functional-765000" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.53s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-765000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-765000 cache add registry.k8s.io/pause:3.1: (1.30591875s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-765000 cache add registry.k8s.io/pause:3.3: (1.196768584s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-765000 cache add registry.k8s.io/pause:latest: (1.118372834s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.62s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-765000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local812612375/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 cache add minikube-local-cache-test:functional-765000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 cache delete minikube-local-cache-test:functional-765000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-765000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-765000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (69.735792ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.96s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 kubectl -- --context functional-765000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.41s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-765000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.54s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-765000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-765000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.499603917s)
functional_test.go:757: restart took 35.499725375s for "functional-765000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (35.50s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-765000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.65s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd3363336311/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.65s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-765000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-765000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-765000: exit status 115 (124.185292ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:30823 |
	|-----------|-------------|-------------|----------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-765000 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-765000 delete -f testdata/invalidsvc.yaml: (1.111558417s)
--- PASS: TestFunctional/serial/InvalidService (4.36s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-765000 config get cpus: exit status 14 (29.4725ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-765000 config get cpus: exit status 14 (28.888125ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.21s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-765000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-765000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2905: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.86s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-765000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-765000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (111.043417ms)

-- stdout --
	* [functional-765000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile

-- /stdout --
** stderr ** 
	I0910 13:57:01.731974    2888 out.go:296] Setting OutFile to fd 1 ...
	I0910 13:57:01.732095    2888 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 13:57:01.732098    2888 out.go:309] Setting ErrFile to fd 2...
	I0910 13:57:01.732100    2888 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 13:57:01.732219    2888 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 13:57:01.733153    2888 out.go:303] Setting JSON to false
	I0910 13:57:01.749925    2888 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1596,"bootTime":1694377825,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0910 13:57:01.749974    2888 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0910 13:57:01.755080    2888 out.go:177] * [functional-765000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0910 13:57:01.762183    2888 out.go:177]   - MINIKUBE_LOCATION=17207
	I0910 13:57:01.766068    2888 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 13:57:01.762286    2888 notify.go:220] Checking for updates...
	I0910 13:57:01.772078    2888 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 13:57:01.775173    2888 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 13:57:01.778123    2888 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	I0910 13:57:01.779519    2888 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 13:57:01.783320    2888 config.go:182] Loaded profile config "functional-765000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 13:57:01.783558    2888 driver.go:373] Setting default libvirt URI to qemu:///system
	I0910 13:57:01.788253    2888 out.go:177] * Using the qemu2 driver based on existing profile
	I0910 13:57:01.793054    2888 start.go:298] selected driver: qemu2
	I0910 13:57:01.793063    2888 start.go:902] validating driver "qemu2" against &{Name:functional-765000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:functional-765000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false E
xtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 13:57:01.793118    2888 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 13:57:01.799119    2888 out.go:177] 
	W0910 13:57:01.803080    2888 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0910 13:57:01.807100    2888 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-765000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.22s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-765000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-765000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (101.96225ms)

-- stdout --
	* [functional-765000] minikube v1.31.2 sur Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0910 13:57:01.943249    2899 out.go:296] Setting OutFile to fd 1 ...
	I0910 13:57:01.943351    2899 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 13:57:01.943355    2899 out.go:309] Setting ErrFile to fd 2...
	I0910 13:57:01.943357    2899 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0910 13:57:01.943491    2899 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
	I0910 13:57:01.944764    2899 out.go:303] Setting JSON to false
	I0910 13:57:01.960811    2899 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1596,"bootTime":1694377825,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0910 13:57:01.960899    2899 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0910 13:57:01.964164    2899 out.go:177] * [functional-765000] minikube v1.31.2 sur Darwin 13.5.2 (arm64)
	I0910 13:57:01.970133    2899 out.go:177]   - MINIKUBE_LOCATION=17207
	I0910 13:57:01.974107    2899 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	I0910 13:57:01.970143    2899 notify.go:220] Checking for updates...
	I0910 13:57:01.979977    2899 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0910 13:57:01.983126    2899 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0910 13:57:01.986122    2899 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	I0910 13:57:01.987533    2899 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0910 13:57:01.990383    2899 config.go:182] Loaded profile config "functional-765000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0910 13:57:01.990609    2899 driver.go:373] Setting default libvirt URI to qemu:///system
	I0910 13:57:01.995091    2899 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0910 13:57:02.000136    2899 start.go:298] selected driver: qemu2
	I0910 13:57:02.000141    2899 start.go:902] validating driver "qemu2" against &{Name:functional-765000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:functional-765000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false E
xtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0910 13:57:02.000189    2899 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0910 13:57:02.006163    2899 out.go:177] 
	W0910 13:57:02.010055    2899 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0910 13:57:02.014092    2899 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.10s)

TestFunctional/parallel/StatusCmd (0.25s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.25s)

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (25.5s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [43600b09-a295-4a4f-8094-df7394d753b7] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.006623208s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-765000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-765000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-765000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-765000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [57019ffd-2aa4-42c4-bf06-c09b510106fb] Pending
helpers_test.go:344: "sp-pod" [57019ffd-2aa4-42c4-bf06-c09b510106fb] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [57019ffd-2aa4-42c4-bf06-c09b510106fb] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.009595917s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-765000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-765000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-765000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d2267722-85f0-4971-ad9f-0063c3823902] Pending
helpers_test.go:344: "sp-pod" [d2267722-85f0-4971-ad9f-0063c3823902] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d2267722-85f0-4971-ad9f-0063c3823902] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.006898625s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-765000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.50s)

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.28s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh -n functional-765000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 cp functional-765000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3468669269/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh -n functional-765000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.28s)

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/2200/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh "sudo cat /etc/test/nested/copy/2200/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.41s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/2200.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh "sudo cat /etc/ssl/certs/2200.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/2200.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh "sudo cat /usr/share/ca-certificates/2200.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/22002.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh "sudo cat /etc/ssl/certs/22002.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/22002.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh "sudo cat /usr/share/ca-certificates/22002.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.41s)

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-765000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.12s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-765000 ssh "sudo systemctl is-active crio": exit status 1 (121.792125ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.12s)

TestFunctional/parallel/License (0.2s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.20s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.17s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.17s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-765000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.1
registry.k8s.io/kube-proxy:v1.28.1
registry.k8s.io/kube-controller-manager:v1.28.1
registry.k8s.io/kube-apiserver:v1.28.1
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-765000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-765000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-765000 image ls --format short --alsologtostderr:
I0910 13:57:11.254265    2942 out.go:296] Setting OutFile to fd 1 ...
I0910 13:57:11.254617    2942 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0910 13:57:11.254621    2942 out.go:309] Setting ErrFile to fd 2...
I0910 13:57:11.254623    2942 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0910 13:57:11.254747    2942 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
I0910 13:57:11.255114    2942 config.go:182] Loaded profile config "functional-765000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0910 13:57:11.255179    2942 config.go:182] Loaded profile config "functional-765000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0910 13:57:11.256056    2942 ssh_runner.go:195] Run: systemctl --version
I0910 13:57:11.256066    2942 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/functional-765000/id_rsa Username:docker}
I0910 13:57:11.288196    2942 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-765000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | latest            | 91582cfffc2d0 | 192MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/kube-proxy                  | v1.28.1           | 812f5241df7fd | 68.3MB |
| registry.k8s.io/etcd                        | 3.5.9-0           | 9cdd6470f48c8 | 181MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| docker.io/library/minikube-local-cache-test | functional-765000 | 97b1b8e37c943 | 30B    |
| docker.io/library/nginx                     | alpine            | fa0c6bb795403 | 43.4MB |
| registry.k8s.io/kube-controller-manager     | v1.28.1           | 8b6e1980b7584 | 116MB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/google-containers/addon-resizer      | functional-765000 | ffd4cfbbe753e | 32.9MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/kube-apiserver              | v1.28.1           | b29fb62480892 | 119MB  |
| registry.k8s.io/kube-scheduler              | v1.28.1           | b4a5a57e99492 | 57.8MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-765000 image ls --format table --alsologtostderr:
I0910 13:57:11.409432    2951 out.go:296] Setting OutFile to fd 1 ...
I0910 13:57:11.409543    2951 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0910 13:57:11.409547    2951 out.go:309] Setting ErrFile to fd 2...
I0910 13:57:11.409549    2951 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0910 13:57:11.409671    2951 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
I0910 13:57:11.410075    2951 config.go:182] Loaded profile config "functional-765000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0910 13:57:11.410137    2951 config.go:182] Loaded profile config "functional-765000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0910 13:57:11.410931    2951 ssh_runner.go:195] Run: systemctl --version
I0910 13:57:11.410941    2951 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/functional-765000/id_rsa Username:docker}
I0910 13:57:11.441763    2951 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-765000 image ls --format json --alsologtostderr:
[{"id":"b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.1"],"size":"57800000"},{"id":"91582cfffc2d0daa6f42adb6fb74665a047310f76a28e9ed5b0185a2d0f362a6","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-765000"],"size":"32900000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.1"],"size":"116000000"},{"id":"9cdd6470f48c8b127
530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"181000000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"97b1b8e37c943fad276dd3168232548f494d2029b9d842128832ebbb2af63a9f","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-765000"],"size":"30"},{"id":"fa0c6bb795403f8762e5cbf7b9f395aa036e7bd61c707485c1968b79bb3da9f1","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43400000"},{"id":"b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.1"],"size":"119000000"},{"id":"812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.1"],"size":"68300000"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-765000 image ls --format json --alsologtostderr:
I0910 13:57:11.335556    2947 out.go:296] Setting OutFile to fd 1 ...
I0910 13:57:11.335653    2947 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0910 13:57:11.335656    2947 out.go:309] Setting ErrFile to fd 2...
I0910 13:57:11.335658    2947 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0910 13:57:11.335790    2947 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
I0910 13:57:11.336172    2947 config.go:182] Loaded profile config "functional-765000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0910 13:57:11.336229    2947 config.go:182] Loaded profile config "functional-765000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0910 13:57:11.336995    2947 ssh_runner.go:195] Run: systemctl --version
I0910 13:57:11.337005    2947 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/functional-765000/id_rsa Username:docker}
I0910 13:57:11.366896    2947 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)
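As a side note, the JSON that `image ls --format json` prints above is a flat array of image records whose `size` field is a decimal string. A minimal sketch (not part of the test suite) of post-processing that output, using two entries copied verbatim from the report:

```python
import json

# Two entries taken verbatim from the `image ls --format json` output above.
# "size" is reported as a decimal string, so it must be converted with int()
# before any arithmetic.
raw = ('[{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",'
       '"repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},'
       '{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300",'
       '"repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"}]')

images = json.loads(raw)
# Index sizes by repo tag (an image may carry several tags).
by_tag = {tag: int(img["size"]) for img in images for tag in img["repoTags"]}
total_bytes = sum(int(img["size"]) for img in images)

print(by_tag["registry.k8s.io/pause:3.9"])  # 514000
print(total_bytes)                          # 998000
```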

TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-765000 image ls --format yaml --alsologtostderr:
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 97b1b8e37c943fad276dd3168232548f494d2029b9d842128832ebbb2af63a9f
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-765000
size: "30"
- id: fa0c6bb795403f8762e5cbf7b9f395aa036e7bd61c707485c1968b79bb3da9f1
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43400000"
- id: b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.1
size: "119000000"
- id: b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.1
size: "57800000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: 91582cfffc2d0daa6f42adb6fb74665a047310f76a28e9ed5b0185a2d0f362a6
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.1
size: "68300000"
- id: 8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.1
size: "116000000"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51400000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-765000
size: "32900000"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "181000000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-765000 image ls --format yaml --alsologtostderr:
I0910 13:57:11.254317    2943 out.go:296] Setting OutFile to fd 1 ...
I0910 13:57:11.254617    2943 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0910 13:57:11.254621    2943 out.go:309] Setting ErrFile to fd 2...
I0910 13:57:11.254623    2943 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0910 13:57:11.254746    2943 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
I0910 13:57:11.255099    2943 config.go:182] Loaded profile config "functional-765000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0910 13:57:11.255157    2943 config.go:182] Loaded profile config "functional-765000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0910 13:57:11.256593    2943 ssh_runner.go:195] Run: systemctl --version
I0910 13:57:11.256601    2943 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/functional-765000/id_rsa Username:docker}
I0910 13:57:11.287254    2943 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-765000 ssh pgrep buildkitd: exit status 1 (65.297292ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 image build -t localhost/my-image:functional-765000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-765000 image build -t localhost/my-image:functional-765000 testdata/build --alsologtostderr: (1.689804792s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-765000 image build -t localhost/my-image:functional-765000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
a01966dde7f8: Pulling fs layer
a01966dde7f8: Verifying Checksum
a01966dde7f8: Download complete
a01966dde7f8: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> 71a676dd070f
Step 2/3 : RUN true
---> Running in ef8d16f41283
Removing intermediate container ef8d16f41283
---> 46a6741a09be
Step 3/3 : ADD content.txt /
---> f956e4d11fdd
Successfully built f956e4d11fdd
Successfully tagged localhost/my-image:functional-765000
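The three `Step 1/3`..`3/3` lines above imply that the `testdata/build` context contains a Dockerfile along these lines (a reconstruction from the log, not the actual file in the minikube repo):

```dockerfile
# Reconstructed from the build steps logged above; the real testdata/build
# directory would also contain content.txt alongside this Dockerfile.
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
```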
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-765000 image build -t localhost/my-image:functional-765000 testdata/build --alsologtostderr:
I0910 13:57:11.398285    2950 out.go:296] Setting OutFile to fd 1 ...
I0910 13:57:11.398503    2950 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0910 13:57:11.398506    2950 out.go:309] Setting ErrFile to fd 2...
I0910 13:57:11.398509    2950 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0910 13:57:11.398651    2950 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17207-1093/.minikube/bin
I0910 13:57:11.399043    2950 config.go:182] Loaded profile config "functional-765000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0910 13:57:11.399439    2950 config.go:182] Loaded profile config "functional-765000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0910 13:57:11.400374    2950 ssh_runner.go:195] Run: systemctl --version
I0910 13:57:11.400383    2950 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17207-1093/.minikube/machines/functional-765000/id_rsa Username:docker}
I0910 13:57:11.430858    2950 build_images.go:151] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.2170047208.tar
I0910 13:57:11.430921    2950 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0910 13:57:11.433717    2950 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2170047208.tar
I0910 13:57:11.435102    2950 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2170047208.tar: stat -c "%s %y" /var/lib/minikube/build/build.2170047208.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2170047208.tar': No such file or directory
I0910 13:57:11.435119    2950 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.2170047208.tar --> /var/lib/minikube/build/build.2170047208.tar (3072 bytes)
I0910 13:57:11.443924    2950 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2170047208
I0910 13:57:11.449148    2950 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2170047208 -xf /var/lib/minikube/build/build.2170047208.tar
I0910 13:57:11.452311    2950 docker.go:339] Building image: /var/lib/minikube/build/build.2170047208
I0910 13:57:11.452355    2950 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-765000 /var/lib/minikube/build/build.2170047208
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0910 13:57:13.032674    2950 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-765000 /var/lib/minikube/build/build.2170047208: (1.580325s)
I0910 13:57:13.032734    2950 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2170047208
I0910 13:57:13.035858    2950 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2170047208.tar
I0910 13:57:13.038569    2950 build_images.go:207] Built localhost/my-image:functional-765000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.2170047208.tar
I0910 13:57:13.038583    2950 build_images.go:123] succeeded building to: functional-765000
I0910 13:57:13.038586    2950 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.84s)

TestFunctional/parallel/ImageCommands/Setup (1.53s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.480780916s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-765000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.53s)

TestFunctional/parallel/DockerEnv/bash (0.38s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-765000 docker-env) && out/minikube-darwin-arm64 status -p functional-765000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-765000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.38s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

TestFunctional/parallel/ServiceCmd/DeployApp (12.12s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-765000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-765000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-msrgv" [04719a35-de65-4ee6-9fa2-d410766df0a2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-msrgv" [04719a35-de65-4ee6-9fa2-d410766df0a2] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.020422708s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.12s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 image load --daemon gcr.io/google-containers/addon-resizer:functional-765000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-765000 image load --daemon gcr.io/google-containers/addon-resizer:functional-765000 --alsologtostderr: (2.161795s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.24s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 image load --daemon gcr.io/google-containers/addon-resizer:functional-765000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-765000 image load --daemon gcr.io/google-containers/addon-resizer:functional-765000 --alsologtostderr: (1.561562291s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.64s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.405520166s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-765000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 image load --daemon gcr.io/google-containers/addon-resizer:functional-765000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-765000 image load --daemon gcr.io/google-containers/addon-resizer:functional-765000 --alsologtostderr: (1.95213325s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.47s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 image save gcr.io/google-containers/addon-resizer:functional-765000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 image rm gcr.io/google-containers/addon-resizer:functional-765000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-765000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 image save --daemon gcr.io/google-containers/addon-resizer:functional-765000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-765000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.63s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-765000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-765000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [acd010b8-f720-4fc0-b934-5535906d7509] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [acd010b8-f720-4fc0-b934-5535906d7509] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.00638875s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.12s)

TestFunctional/parallel/ServiceCmd/List (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.09s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 service list -o json
functional_test.go:1493: Took "90.749375ms" to run "out/minikube-darwin-arm64 -p functional-765000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.105.4:31632
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.10s)

TestFunctional/parallel/ServiceCmd/Format (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.11s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.105.4:31632
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-765000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.120.11 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-765000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.18s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.18s)

TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1314: Took "116.487958ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1328: Took "32.858ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.15s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1365: Took "115.015417ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1378: Took "34.582583ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-765000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3013722247/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1694379411616744000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3013722247/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1694379411616744000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3013722247/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1694379411616744000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3013722247/001/test-1694379411616744000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-765000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (63.546958ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 10 20:56 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 10 20:56 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 10 20:56 test-1694379411616744000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh cat /mount-9p/test-1694379411616744000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-765000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [6e3332bb-c39a-44c6-a9d8-5093d86f2520] Pending
helpers_test.go:344: "busybox-mount" [6e3332bb-c39a-44c6-a9d8-5093d86f2520] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [6e3332bb-c39a-44c6-a9d8-5093d86f2520] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [6e3332bb-c39a-44c6-a9d8-5093d86f2520] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.008866583s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-765000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-765000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3013722247/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.18s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-765000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1765699636/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-765000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (64.830625ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-765000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1765699636/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-765000 ssh "sudo umount -f /mount-9p": exit status 1 (64.15725ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-765000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-765000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1765699636/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.83s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-765000
--- PASS: TestFunctional/delete_addon-resizer_images (0.12s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-765000
--- PASS: TestFunctional/delete_my-image_image (0.04s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-765000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-050000 --driver=qemu2 
image_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -p image-050000 --driver=qemu2 : (29.743594s)
--- PASS: TestImageBuild/serial/Setup (29.74s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-050000
image_test.go:78: (dbg) Done: out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-050000: (1.093970542s)
--- PASS: TestImageBuild/serial/NormalBuild (1.09s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-050000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.13s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-050000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.11s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-arm64 start -p ingress-addon-legacy-065000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-arm64 start -p ingress-addon-legacy-065000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 : (1m15.588398042s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (75.59s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-065000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-065000 addons enable ingress --alsologtostderr -v=5: (13.38361875s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.38s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-065000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.26s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-498000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 start -p json-output-498000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : (43.064581416s)
--- PASS: TestJSONOutput/start/Command (43.06s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-498000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.30s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-498000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.22s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-498000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-498000 --output=json --user=testUser: (12.075856375s)
--- PASS: TestJSONOutput/stop/Command (12.08s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-292000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-292000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (88.893833ms)
-- stdout --
	{"specversion":"1.0","id":"30611d32-871a-42ed-8380-f96025ca1d11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-292000] minikube v1.31.2 on Darwin 13.5.2 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4a7604bf-819f-4b6e-9468-916e2612c500","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17207"}}
	{"specversion":"1.0","id":"03de3b6d-002f-4059-bb72-3fc0c7b673fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig"}}
	{"specversion":"1.0","id":"4b007c2c-c889-4f68-a56c-41feb96aa9e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"b16ab974-b48a-416d-93b4-4db50d2ad322","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1a60eaed-f43b-402f-9399-ef62c513530e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube"}}
	{"specversion":"1.0","id":"69ac522b-29a1-44f7-8c84-26a3dc878fad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"56083011-ff62-465b-9ec2-e318504dcf67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-292000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-292000
--- PASS: TestErrorJSONOutput (0.32s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-137000 --driver=qemu2 
E0910 14:01:12.903124    2200 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/functional-765000/client.crt: no such file or directory
E0910 14:01:12.911596    2200 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/functional-765000/client.crt: no such file or directory
E0910 14:01:12.923725    2200 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/functional-765000/client.crt: no such file or directory
E0910 14:01:12.945856    2200 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/functional-765000/client.crt: no such file or directory
E0910 14:01:12.987939    2200 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/functional-765000/client.crt: no such file or directory
E0910 14:01:13.070027    2200 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/functional-765000/client.crt: no such file or directory
E0910 14:01:13.232325    2200 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/functional-765000/client.crt: no such file or directory
E0910 14:01:13.554483    2200 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/functional-765000/client.crt: no such file or directory
E0910 14:01:14.196575    2200 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/functional-765000/client.crt: no such file or directory
E0910 14:01:15.478596    2200 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/functional-765000/client.crt: no such file or directory
E0910 14:01:18.040714    2200 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/functional-765000/client.crt: no such file or directory
E0910 14:01:23.163000    2200 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/functional-765000/client.crt: no such file or directory
E0910 14:01:33.405236    2200 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/functional-765000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p first-137000 --driver=qemu2 : (29.957106583s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p second-138000 --driver=qemu2 
E0910 14:01:53.887553    2200 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17207-1093/.minikube/profiles/functional-765000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p second-138000 --driver=qemu2 : (33.497860708s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile first-137000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile second-138000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-138000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-138000
helpers_test.go:175: Cleaning up "first-137000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-137000
--- PASS: TestMinikubeProfile (64.23s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-235000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-235000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (102.820083ms)
-- stdout --
	* [NoKubernetes-235000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17207
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17207-1093/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17207-1093/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
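The MK_USAGE failure above is the test passing: it asserts that `minikube start` rejects the combination of `--no-kubernetes` and `--kubernetes-version`, exiting with status 14. A minimal sketch of that negative-test pattern in POSIX sh (the `expect_exit` helper is hypothetical, not part of the minikube test suite):

```shell
# Hypothetical helper: run a command and assert it exits with a specific
# status, mirroring how the test treats exit 14 (MK_USAGE) as the expected
# outcome rather than a failure.
expect_exit() {
  want="$1"; shift
  got=0
  "$@" || got=$?   # capture the status without aborting under `set -e`
  if [ "$got" -eq "$want" ]; then
    echo "ok: exit $got"
  else
    echo "FAIL: want exit $want, got exit $got" >&2
    return 1
  fi
}

# The log's invocation would be checked as:
#   expect_exit 14 out/minikube-darwin-arm64 start -p NoKubernetes-235000 \
#     --no-kubernetes --kubernetes-version=1.20 --driver=qemu2
# Demonstrated here with a stand-in command:
expect_exit 3 sh -c 'exit 3'
```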

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-235000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-235000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (43.364833ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-235000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
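For context on the probe above: `systemctl is-active --quiet <unit>` prints nothing and reports state purely through its exit code (0 when the unit is active, non-zero otherwise), so a non-zero exit over ssh is the test's evidence that kubelet is not running. The same status-by-exit-code branching, sketched with stand-in commands so it runs without systemd:

```shell
# `true`/`false` stand in for an active/inactive unit; on a systemd host the
# condition would be:  sudo systemctl is-active --quiet kubelet
check_active() {
  if "$@"; then
    echo active
  else
    echo inactive
  fi
}

check_active true    # stand-in for an active unit; prints: active
check_active false   # stand-in for an inactive unit; prints: inactive
```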

TestNoKubernetes/serial/ProfileList (0.15s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.15s)

TestNoKubernetes/serial/Stop (0.06s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-235000
--- PASS: TestNoKubernetes/serial/Stop (0.06s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-235000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-235000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (42.240291ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-235000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStartStop/group/old-k8s-version/serial/Stop (0.06s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-409000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (0.06s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-409000 -n old-k8s-version-409000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-409000 -n old-k8s-version-409000: exit status 7 (28.54375ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-409000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/no-preload/serial/Stop (0.06s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-322000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/no-preload/serial/Stop (0.06s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-322000 -n no-preload-322000: exit status 7 (28.597667ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-322000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/embed-certs/serial/Stop (0.06s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-701000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/embed-certs/serial/Stop (0.06s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-701000 -n embed-certs-701000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-701000 -n embed-certs-701000: exit status 7 (29.184042ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-701000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-546000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-546000 -n default-k8s-diff-port-546000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-546000 -n default-k8s-diff-port-546000: exit status 7 (28.551916ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-546000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-578000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-578000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/newest-cni/serial/Stop (0.06s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-578000 -n newest-cni-578000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-578000 -n newest-cni-578000: exit status 7 (29.725375ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-578000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (21/244)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.28.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.1/cached-images (0.00s)

TestDownloadOnly/v1.28.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.1/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.1/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/VerifyCleanup (13.53s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-765000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup253137323/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-765000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup253137323/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-765000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup253137323/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-765000 ssh "findmnt -T" /mount1: exit status 1 (69.313375ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-765000 ssh "findmnt -T" /mount3: exit status 1 (61.312959ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-765000 ssh "findmnt -T" /mount3: exit status 1 (62.173ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-765000 ssh "findmnt -T" /mount3: exit status 1 (67.284084ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-765000 ssh "findmnt -T" /mount3: exit status 1 (62.355791ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-765000 ssh "findmnt -T" /mount3: exit status 1 (62.19825ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh "findmnt -T" /mount1
2023/09/10 13:57:10 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-765000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-765000 ssh "findmnt -T" /mount3: exit status 1 (61.517167ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-765000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup253137323/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-765000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup253137323/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-765000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup253137323/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (13.53s)
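The repeated `findmnt` probes above are a poll loop: the test keeps asking whether each mount has appeared and gives up after a bounded number of attempts, which here it does because macOS refuses to let the unsigned binary listen. A generic sketch of that poll-until-success shape in POSIX sh (the `poll` helper and its arguments are illustrative, not the test's actual code):

```shell
# Retry a command up to N times; succeed as soon as it does.
poll() {
  attempts="$1"; shift
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if "$@"; then
      return 0
    fi
    i=$((i + 1))
    # the real test waits between probes; the delay is omitted here
  done
  return 1
}

# The log's probe would look roughly like:
#   poll 7 out/minikube-darwin-arm64 -p functional-765000 ssh "findmnt -T /mount3"
# Demonstrated with stand-in commands:
poll 3 true && echo "mount appeared"
poll 3 false || echo "gave up after 3 attempts"
```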

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.4s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-322000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-322000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-322000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-322000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-322000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-322000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-322000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-322000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-322000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-322000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-322000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-322000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-322000"

>>> host: /etc/hosts:
* Profile "cilium-322000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-322000"

>>> host: /etc/resolv.conf:
* Profile "cilium-322000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-322000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-322000

>>> host: crictl pods:
* Profile "cilium-322000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-322000"

>>> host: crictl containers:
* Profile "cilium-322000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-322000"

>>> k8s: describe netcat deployment:
error: context "cilium-322000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-322000" does not exist

>>> k8s: netcat logs:
error: context "cilium-322000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-322000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-322000" does not exist

>>> k8s: coredns logs:
error: context "cilium-322000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-322000" does not exist

>>> k8s: api server logs:
error: context "cilium-322000" does not exist

>>> host: /etc/cni:
* Profile "cilium-322000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-322000"

>>> host: ip a s:
* Profile "cilium-322000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-322000"

>>> host: ip r s:
* Profile "cilium-322000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-322000"

>>> host: iptables-save:
* Profile "cilium-322000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-322000"

>>> host: iptables table nat:
* Profile "cilium-322000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-322000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-322000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-322000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-322000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-322000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-322000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-322000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-322000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-322000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-322000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-322000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-322000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-322000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-322000"

>>> host: kubelet daemon config:
* Profile "cilium-322000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-322000"

>>> k8s: kubelet logs:
* Profile "cilium-322000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-322000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-322000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-322000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-322000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-322000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-322000

>>> host: docker daemon status:
* Profile "cilium-322000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-322000"

>>> host: docker daemon config:
* Profile "cilium-322000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-322000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-322000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-322000"

>>> host: docker system info:
* Profile "cilium-322000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-322000"

>>> host: cri-docker daemon status:
* Profile "cilium-322000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-322000"

>>> host: cri-docker daemon config:
* Profile "cilium-322000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-322000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-322000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-322000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-322000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-322000"

>>> host: cri-dockerd version:
* Profile "cilium-322000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-322000"

>>> host: containerd daemon status:
* Profile "cilium-322000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-322000"

>>> host: containerd daemon config:
* Profile "cilium-322000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-322000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-322000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-322000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-322000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-322000"

>>> host: containerd config dump:
* Profile "cilium-322000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-322000"

>>> host: crio daemon status:
* Profile "cilium-322000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-322000"

>>> host: crio daemon config:
* Profile "cilium-322000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-322000"

>>> host: /etc/crio:
* Profile "cilium-322000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-322000"

>>> host: crio config:
* Profile "cilium-322000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-322000"

----------------------- debugLogs end: cilium-322000 [took: 2.161989916s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-322000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-322000
--- SKIP: TestNetworkPlugins/group/cilium (2.40s)

TestStartStop/group/disable-driver-mounts (0.24s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-022000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-022000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.24s)
