Test Report: QEMU_macOS 17225

8d920970282225f83d426c443f886ca4d2c7eb6f : 2023-09-11 : 30960

Failed tests: 122 of 248

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 13.31
7 TestDownloadOnly/v1.16.0/kubectl 0
20 TestOffline 9.87
22 TestAddons/Setup 42.93
23 TestCertOptions 9.92
24 TestCertExpiration 195.2
25 TestDockerFlags 10.06
26 TestForceSystemdFlag 10.07
27 TestForceSystemdEnv 10.36
42 TestFunctional/serial/StartWithProxy 79.15
44 TestFunctional/serial/SoftStart 120.21
45 TestFunctional/serial/KubeContext 0.1
46 TestFunctional/serial/KubectlGetPods 0.1
53 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 60.05
54 TestFunctional/serial/CacheCmd/cache/cache_reload 301.22
56 TestFunctional/serial/MinikubeKubectlCmd 0.53
57 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.65
58 TestFunctional/serial/ExtraConfig 118.91
59 TestFunctional/serial/ComponentHealth 0.1
61 TestFunctional/serial/LogsFileCmd 180.75
62 TestFunctional/serial/InvalidService 0.05
65 TestFunctional/parallel/DashboardCmd 0.27
68 TestFunctional/parallel/StatusCmd 0.29
72 TestFunctional/parallel/ServiceCmdConnect 0.18
74 TestFunctional/parallel/PersistentVolumeClaim 0.12
80 TestFunctional/parallel/CertSync 0.46
84 TestFunctional/parallel/NodeLabels 0.14
91 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
92 TestFunctional/parallel/ImageCommands/ImageListTable 60.14
93 TestFunctional/parallel/ImageCommands/ImageListJson 60.14
94 TestFunctional/parallel/ImageCommands/ImageListYaml 60.12
95 TestFunctional/parallel/ImageCommands/ImageBuild 120.22
97 TestFunctional/parallel/DockerEnv/bash 300.16
98 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 118.32
99 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 120.52
100 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 120.19
101 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
104 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
105 TestFunctional/parallel/ServiceCmd/List 0.08
106 TestFunctional/parallel/ServiceCmd/JSONOutput 0.08
107 TestFunctional/parallel/ServiceCmd/HTTPS 0.08
108 TestFunctional/parallel/ServiceCmd/Format 0.08
109 TestFunctional/parallel/ServiceCmd/URL 0.07
111 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.11
114 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
115 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 83.13
116 TestFunctional/parallel/ImageCommands/ImageSaveToFile 60.14
117 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
119 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 37.12
125 TestFunctional/parallel/MountCmd/any-port 1.3
128 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.11
138 TestImageBuild/serial/BuildWithBuildArg 1.04
147 TestIngressAddonLegacy/serial/ValidateIngressAddons 57.08
179 TestMinikubeProfile 18.12
187 TestMountStart/serial/VerifyMountPostDelete 101.04
196 TestMultiNode/serial/StopNode 378.18
197 TestMultiNode/serial/StartAfterStop 230.16
198 TestMultiNode/serial/RestartKeepsNodes 41.51
199 TestMultiNode/serial/DeleteNode 0.1
200 TestMultiNode/serial/StopMultiNode 0.17
201 TestMultiNode/serial/RestartMultiNode 5.23
202 TestMultiNode/serial/ValidateNameConflict 10.47
206 TestPreload 9.85
208 TestScheduledStopUnix 9.85
209 TestSkaffold 11.92
212 TestRunningBinaryUpgrade 138.23
214 TestKubernetesUpgrade 15.32
227 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 2.46
228 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.83
229 TestStoppedBinaryUpgrade/Setup 153.15
231 TestPause/serial/Start 10.04
241 TestNoKubernetes/serial/StartWithK8s 9.81
242 TestNoKubernetes/serial/StartWithStopK8s 5.31
243 TestNoKubernetes/serial/Start 5.32
247 TestNoKubernetes/serial/StartNoArgs 5.3
249 TestNetworkPlugins/group/auto/Start 9.64
250 TestNetworkPlugins/group/kindnet/Start 9.77
251 TestNetworkPlugins/group/calico/Start 9.68
252 TestNetworkPlugins/group/custom-flannel/Start 9.69
253 TestNetworkPlugins/group/false/Start 9.81
254 TestNetworkPlugins/group/enable-default-cni/Start 9.84
255 TestNetworkPlugins/group/flannel/Start 9.84
256 TestNetworkPlugins/group/bridge/Start 9.71
257 TestStoppedBinaryUpgrade/Upgrade 3.53
258 TestStoppedBinaryUpgrade/MinikubeLogs 0.08
259 TestNetworkPlugins/group/kubenet/Start 10.02
261 TestStartStop/group/old-k8s-version/serial/FirstStart 10.92
263 TestStartStop/group/no-preload/serial/FirstStart 10.25
264 TestStartStop/group/old-k8s-version/serial/DeployApp 0.14
265 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.17
268 TestStartStop/group/old-k8s-version/serial/SecondStart 6.99
269 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
270 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
271 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.19
272 TestStartStop/group/old-k8s-version/serial/Pause 0.1
274 TestStartStop/group/embed-certs/serial/FirstStart 11.81
275 TestStartStop/group/no-preload/serial/DeployApp 0.18
276 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.14
279 TestStartStop/group/no-preload/serial/SecondStart 7.02
280 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
281 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
282 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.17
283 TestStartStop/group/no-preload/serial/Pause 0.1
285 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 11.45
286 TestStartStop/group/embed-certs/serial/DeployApp 0.12
287 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.2
290 TestStartStop/group/embed-certs/serial/SecondStart 7.06
291 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.04
292 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
293 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.18
294 TestStartStop/group/embed-certs/serial/Pause 0.1
296 TestStartStop/group/newest-cni/serial/FirstStart 11.4
297 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
298 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.24
301 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.93
302 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
303 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
304 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.19
305 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
310 TestStartStop/group/newest-cni/serial/SecondStart 5.24
313 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
314 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.16.0/json-events (13.31s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-074000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-074000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 : exit status 40 (13.310167375s)

-- stdout --
	{"specversion":"1.0","id":"d76f4f5d-f16a-431a-b16a-2ab295ca1ac6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-074000] minikube v1.31.2 on Darwin 13.5.2 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"91127c9a-6e27-4e55-a862-4915f5b195a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17225"}}
	{"specversion":"1.0","id":"bca46052-0635-4218-bee5-9af193535664","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig"}}
	{"specversion":"1.0","id":"226115cf-f3fa-48b9-b269-f90e816c879d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"b88b3cd3-7db2-4907-b42c-576c8779ddc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"66f301ca-9060-476b-89d1-e9338ea2e409","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube"}}
	{"specversion":"1.0","id":"b70e3421-eb72-410b-a57d-36af52fc9123","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"bdf15dfb-569c-44f1-8137-2968af73a44a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"61d24a41-16bb-414f-aa17-41f411757298","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"1b3278dd-2e00-4dd5-8b00-9091fdf95847","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"825a4bcd-acd6-4589-bb58-4b6501d34dbb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node download-only-074000 in cluster download-only-074000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"edcb6b75-b09f-42d1-9718-fa9c65759640","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.16.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c7ffb3a3-a5e1-4c49-ad68-a24a8bbd7214","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17225-951/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104a99f68 0x104a99f68 0x104a99f68 0x104a99f68 0x104a99f68 0x104a99f68 0x104a99f68] Decompressors:map[bz2:0x1400011e588 gz:0x1400011e5e0 tar:0x1400011e590 tar.bz2:0x1400011e5a0 tar.gz:0x1400011e5b0 tar.xz:0x1400011e5c0 tar.zst:0x1400011e5d0 tbz2:0x1400011e5a0 tgz:0x1400011
e5b0 txz:0x1400011e5c0 tzst:0x1400011e5d0 xz:0x1400011e5e8 zip:0x1400011e5f0 zst:0x1400011e600] Getters:map[file:0x140005cc5b0 http:0x14000dca640 https:0x14000dca690] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"ffaf61a5-d6dd-4cbc-ba58-133ee9051aaa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0911 03:33:27.541399    1395 out.go:296] Setting OutFile to fd 1 ...
	I0911 03:33:27.541522    1395 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 03:33:27.541524    1395 out.go:309] Setting ErrFile to fd 2...
	I0911 03:33:27.541527    1395 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 03:33:27.541656    1395 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	W0911 03:33:27.541728    1395 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17225-951/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17225-951/.minikube/config/config.json: no such file or directory
	I0911 03:33:27.542878    1395 out.go:303] Setting JSON to true
	I0911 03:33:27.559195    1395 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":181,"bootTime":1694428226,"procs":396,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0911 03:33:27.559253    1395 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 03:33:27.567860    1395 out.go:97] [download-only-074000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	W0911 03:33:27.568001    1395 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball: no such file or directory
	I0911 03:33:27.571797    1395 out.go:169] MINIKUBE_LOCATION=17225
	I0911 03:33:27.568022    1395 notify.go:220] Checking for updates...
	I0911 03:33:27.582759    1395 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 03:33:27.585798    1395 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 03:33:27.588778    1395 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 03:33:27.591822    1395 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	W0911 03:33:27.595791    1395 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0911 03:33:27.596014    1395 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 03:33:27.600809    1395 out.go:97] Using the qemu2 driver based on user configuration
	I0911 03:33:27.600815    1395 start.go:298] selected driver: qemu2
	I0911 03:33:27.600817    1395 start.go:902] validating driver "qemu2" against <nil>
	I0911 03:33:27.600857    1395 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 03:33:27.604835    1395 out.go:169] Automatically selected the socket_vmnet network
	I0911 03:33:27.610284    1395 start_flags.go:384] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0911 03:33:27.610375    1395 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0911 03:33:27.610440    1395 cni.go:84] Creating CNI manager for ""
	I0911 03:33:27.610456    1395 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0911 03:33:27.610462    1395 start_flags.go:321] config:
	{Name:download-only-074000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-074000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRun
time:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 03:33:27.615973    1395 iso.go:125] acquiring lock: {Name:mk940869f3ea2950f9d7f9f25946c8b774cc6054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 03:33:27.619818    1395 out.go:97] Downloading VM boot image ...
	I0911 03:33:27.619835    1395 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso
	I0911 03:33:33.399441    1395 out.go:97] Starting control plane node download-only-074000 in cluster download-only-074000
	I0911 03:33:33.399465    1395 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0911 03:33:33.455507    1395 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0911 03:33:33.455595    1395 cache.go:57] Caching tarball of preloaded images
	I0911 03:33:33.455753    1395 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0911 03:33:33.461871    1395 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0911 03:33:33.461876    1395 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0911 03:33:33.547024    1395 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0911 03:33:39.783965    1395 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0911 03:33:39.784104    1395 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0911 03:33:40.424308    1395 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0911 03:33:40.424496    1395 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/download-only-074000/config.json ...
	I0911 03:33:40.424514    1395 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/download-only-074000/config.json: {Name:mka4b0a642bec3408aafe4290f6afa7a17904e8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:33:40.424733    1395 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0911 03:33:40.424908    1395 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17225-951/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0911 03:33:40.786850    1395 out.go:169] 
	W0911 03:33:40.791862    1395 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17225-951/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104a99f68 0x104a99f68 0x104a99f68 0x104a99f68 0x104a99f68 0x104a99f68 0x104a99f68] Decompressors:map[bz2:0x1400011e588 gz:0x1400011e5e0 tar:0x1400011e590 tar.bz2:0x1400011e5a0 tar.gz:0x1400011e5b0 tar.xz:0x1400011e5c0 tar.zst:0x1400011e5d0 tbz2:0x1400011e5a0 tgz:0x1400011e5b0 txz:0x1400011e5c0 tzst:0x1400011e5d0 xz:0x1400011e5e8 zip:0x1400011e5f0 zst:0x1400011e600] Getters:map[file:0x140005cc5b0 http:0x14000dca640 https:0x14000dca690] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0911 03:33:40.791889    1395 out_reason.go:110] 
	W0911 03:33:40.797780    1395 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 03:33:40.801791    1395 out.go:169] 

** /stderr **
aaa_download_only_test.go:71: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-074000" "--force" "--alsologtostderr" "--kubernetes-version=v1.16.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.16.0/json-events (13.31s)
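
Diagnosis: the INET_CACHE_KUBECTL error above fails while fetching the checksum file (".sha1") for kubectl v1.16.0 on darwin/arm64, with "bad response code: 404". This suggests kubectl was simply never published for darwin/arm64 at v1.16.0, making the failure environmental rather than specific to this commit. A minimal way to confirm the 404 outside the test harness (assuming curl is available on the host; the URL is copied verbatim from the error):

	$ curl -sI https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 | head -n 1
	# expect a 404 status line, matching "bad response code: 404" above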

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:160: expected the file for binary exist at "/Users/jenkins/minikube-integration/17225-951/.minikube/cache/darwin/arm64/v1.16.0/kubectl" but got error stat /Users/jenkins/minikube-integration/17225-951/.minikube/cache/darwin/arm64/v1.16.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/kubectl (0.00s)
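
This failure is cascading: the kubectl binary was never cached because the download in TestDownloadOnly/v1.16.0/json-events failed, so the test's stat finds nothing. A quick sanity check against the same cache path from the log (a sketch for the CI workspace; the -f format flag is the BSD/macOS form of stat):

	$ stat -f '%N: %z bytes' /Users/jenkins/minikube-integration/17225-951/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	# expect "No such file or directory", matching the test's error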

TestOffline (9.87s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-444000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-444000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.701069333s)

-- stdout --
	* [offline-docker-444000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node offline-docker-444000 in cluster offline-docker-444000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-444000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0911 04:33:02.579125    3374 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:33:02.579248    3374 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:33:02.579252    3374 out.go:309] Setting ErrFile to fd 2...
	I0911 04:33:02.579255    3374 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:33:02.579367    3374 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:33:02.580327    3374 out.go:303] Setting JSON to false
	I0911 04:33:02.596384    3374 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3756,"bootTime":1694428226,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0911 04:33:02.596457    3374 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:33:02.602593    3374 out.go:177] * [offline-docker-444000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:33:02.609642    3374 out.go:177]   - MINIKUBE_LOCATION=17225
	I0911 04:33:02.613536    3374 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 04:33:02.609642    3374 notify.go:220] Checking for updates...
	I0911 04:33:02.619504    3374 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:33:02.622394    3374 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:33:02.625610    3374 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	I0911 04:33:02.628462    3374 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:33:02.631780    3374 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:33:02.635648    3374 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:33:02.642406    3374 start.go:298] selected driver: qemu2
	I0911 04:33:02.642411    3374 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:33:02.642417    3374 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:33:02.644426    3374 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:33:02.647423    3374 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:33:02.650486    3374 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:33:02.650506    3374 cni.go:84] Creating CNI manager for ""
	I0911 04:33:02.650512    3374 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 04:33:02.650516    3374 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0911 04:33:02.650521    3374 start_flags.go:321] config:
	{Name:offline-docker-444000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:offline-docker-444000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthS
ock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:33:02.654619    3374 iso.go:125] acquiring lock: {Name:mk940869f3ea2950f9d7f9f25946c8b774cc6054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:33:02.657377    3374 out.go:177] * Starting control plane node offline-docker-444000 in cluster offline-docker-444000
	I0911 04:33:02.665445    3374 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:33:02.665476    3374 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:33:02.665494    3374 cache.go:57] Caching tarball of preloaded images
	I0911 04:33:02.665564    3374 preload.go:174] Found /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:33:02.665570    3374 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:33:02.665759    3374 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/offline-docker-444000/config.json ...
	I0911 04:33:02.665771    3374 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/offline-docker-444000/config.json: {Name:mkf61bb20bc8cfedd0b886f4be8e0a3141d59cf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:33:02.665989    3374 start.go:365] acquiring machines lock for offline-docker-444000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:33:02.666028    3374 start.go:369] acquired machines lock for "offline-docker-444000" in 29µs
	I0911 04:33:02.666039    3374 start.go:93] Provisioning new machine with config: &{Name:offline-docker-444000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.28.1 ClusterName:offline-docker-444000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:33:02.666083    3374 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:33:02.670463    3374 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0911 04:33:02.684410    3374 start.go:159] libmachine.API.Create for "offline-docker-444000" (driver="qemu2")
	I0911 04:33:02.684433    3374 client.go:168] LocalClient.Create starting
	I0911 04:33:02.684513    3374 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:33:02.684536    3374 main.go:141] libmachine: Decoding PEM data...
	I0911 04:33:02.684547    3374 main.go:141] libmachine: Parsing certificate...
	I0911 04:33:02.684757    3374 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:33:02.685375    3374 main.go:141] libmachine: Decoding PEM data...
	I0911 04:33:02.685418    3374 main.go:141] libmachine: Parsing certificate...
	I0911 04:33:02.685932    3374 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:33:02.824798    3374 main.go:141] libmachine: Creating SSH key...
	I0911 04:33:02.861036    3374 main.go:141] libmachine: Creating Disk image...
	I0911 04:33:02.861048    3374 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:33:02.861190    3374 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/offline-docker-444000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/offline-docker-444000/disk.qcow2
	I0911 04:33:02.869782    3374 main.go:141] libmachine: STDOUT: 
	I0911 04:33:02.869799    3374 main.go:141] libmachine: STDERR: 
	I0911 04:33:02.869861    3374 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/offline-docker-444000/disk.qcow2 +20000M
	I0911 04:33:02.877916    3374 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:33:02.877930    3374 main.go:141] libmachine: STDERR: 
	I0911 04:33:02.877957    3374 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/offline-docker-444000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/offline-docker-444000/disk.qcow2
	I0911 04:33:02.877966    3374 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:33:02.878006    3374 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/offline-docker-444000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/offline-docker-444000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/offline-docker-444000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:57:7b:a1:39:37 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/offline-docker-444000/disk.qcow2
	I0911 04:33:02.879716    3374 main.go:141] libmachine: STDOUT: 
	I0911 04:33:02.879728    3374 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:33:02.879755    3374 client.go:171] LocalClient.Create took 195.312625ms
	I0911 04:33:04.881799    3374 start.go:128] duration metric: createHost completed in 2.215713458s
	I0911 04:33:04.881812    3374 start.go:83] releasing machines lock for "offline-docker-444000", held for 2.21578175s
	W0911 04:33:04.881823    3374 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:33:04.886336    3374 out.go:177] * Deleting "offline-docker-444000" in qemu2 ...
	W0911 04:33:04.897152    3374 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:33:04.897163    3374 start.go:687] Will try again in 5 seconds ...
	I0911 04:33:09.899257    3374 start.go:365] acquiring machines lock for offline-docker-444000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:33:09.899376    3374 start.go:369] acquired machines lock for "offline-docker-444000" in 86.458µs
	I0911 04:33:09.899405    3374 start.go:93] Provisioning new machine with config: &{Name:offline-docker-444000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.28.1 ClusterName:offline-docker-444000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:33:09.899469    3374 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:33:09.908694    3374 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0911 04:33:09.922778    3374 start.go:159] libmachine.API.Create for "offline-docker-444000" (driver="qemu2")
	I0911 04:33:09.922799    3374 client.go:168] LocalClient.Create starting
	I0911 04:33:09.922868    3374 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:33:09.922903    3374 main.go:141] libmachine: Decoding PEM data...
	I0911 04:33:09.922916    3374 main.go:141] libmachine: Parsing certificate...
	I0911 04:33:09.922955    3374 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:33:09.922974    3374 main.go:141] libmachine: Decoding PEM data...
	I0911 04:33:09.922982    3374 main.go:141] libmachine: Parsing certificate...
	I0911 04:33:09.923255    3374 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:33:10.046502    3374 main.go:141] libmachine: Creating SSH key...
	I0911 04:33:10.192960    3374 main.go:141] libmachine: Creating Disk image...
	I0911 04:33:10.192974    3374 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:33:10.193167    3374 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/offline-docker-444000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/offline-docker-444000/disk.qcow2
	I0911 04:33:10.202068    3374 main.go:141] libmachine: STDOUT: 
	I0911 04:33:10.202089    3374 main.go:141] libmachine: STDERR: 
	I0911 04:33:10.202168    3374 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/offline-docker-444000/disk.qcow2 +20000M
	I0911 04:33:10.210016    3374 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:33:10.210040    3374 main.go:141] libmachine: STDERR: 
	I0911 04:33:10.210056    3374 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/offline-docker-444000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/offline-docker-444000/disk.qcow2
	I0911 04:33:10.210062    3374 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:33:10.210114    3374 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/offline-docker-444000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/offline-docker-444000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/offline-docker-444000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:b0:53:9e:25:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/offline-docker-444000/disk.qcow2
	I0911 04:33:10.211868    3374 main.go:141] libmachine: STDOUT: 
	I0911 04:33:10.211881    3374 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:33:10.211904    3374 client.go:171] LocalClient.Create took 289.101584ms
	I0911 04:33:12.214175    3374 start.go:128] duration metric: createHost completed in 2.314596s
	I0911 04:33:12.214257    3374 start.go:83] releasing machines lock for "offline-docker-444000", held for 2.314873458s
	W0911 04:33:12.214650    3374 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-444000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-444000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:33:12.224074    3374 out.go:177] 
	W0911 04:33:12.228058    3374 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:33:12.228098    3374 out.go:239] * 
	* 
	W0911 04:33:12.230925    3374 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:33:12.240046    3374 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-444000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:522: *** TestOffline FAILED at 2023-09-11 04:33:12.254216 -0700 PDT m=+3584.801578751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-444000 -n offline-docker-444000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-444000 -n offline-docker-444000: exit status 7 (66.438625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-444000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-444000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-444000
--- FAIL: TestOffline (9.87s)
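
Root cause per the stderr log: both VM-creation attempts die with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. nothing is listening on the socket_vmnet socket on this agent, and minikube never gets as far as booting the guest. The many other ~10 s Start failures in the table above most likely share this cause. A minimal connectivity probe, reusing the client binary and socket path that appear verbatim in the qemu invocation (the /usr/bin/true payload is an assumption of mine, just a no-op command for the client to exec once the socket fd is handed over):

	$ ls -l /var/run/socket_vmnet
	$ /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true
	# "Connection refused" here confirms the daemon is down, not a minikube regression

If the probe fails the same way, restart the socket_vmnet daemon on the host before re-running the suite; how it is launched (e.g. a launchd job) is specific to this agent's setup, so no exact restart command is given here.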

TestAddons/Setup (42.93s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-211000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-211000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (42.927745292s)

-- stdout --
	* [addons-211000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node addons-211000 in cluster addons-211000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.28.1 on Docker 24.0.5 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying ingress addon...
	
	* Verifying Kubernetes components...
	  - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Verifying registry addon...
	  - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	

-- /stdout --
** stderr ** 
	I0911 03:33:53.297046    1463 out.go:296] Setting OutFile to fd 1 ...
	I0911 03:33:53.297173    1463 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 03:33:53.297176    1463 out.go:309] Setting ErrFile to fd 2...
	I0911 03:33:53.297178    1463 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 03:33:53.297288    1463 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 03:33:53.298244    1463 out.go:303] Setting JSON to false
	I0911 03:33:53.313318    1463 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":207,"bootTime":1694428226,"procs":384,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0911 03:33:53.313375    1463 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 03:33:53.318622    1463 out.go:177] * [addons-211000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 03:33:53.325648    1463 out.go:177]   - MINIKUBE_LOCATION=17225
	I0911 03:33:53.325678    1463 notify.go:220] Checking for updates...
	I0911 03:33:53.332612    1463 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 03:33:53.335622    1463 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 03:33:53.338627    1463 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 03:33:53.341598    1463 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	I0911 03:33:53.344585    1463 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 03:33:53.347772    1463 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 03:33:53.350596    1463 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 03:33:53.357564    1463 start.go:298] selected driver: qemu2
	I0911 03:33:53.357570    1463 start.go:902] validating driver "qemu2" against <nil>
	I0911 03:33:53.357575    1463 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 03:33:53.359442    1463 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 03:33:53.360861    1463 out.go:177] * Automatically selected the socket_vmnet network
	I0911 03:33:53.363717    1463 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 03:33:53.363747    1463 cni.go:84] Creating CNI manager for ""
	I0911 03:33:53.363756    1463 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 03:33:53.363760    1463 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0911 03:33:53.363765    1463 start_flags.go:321] config:
	{Name:addons-211000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-211000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 03:33:53.367772    1463 iso.go:125] acquiring lock: {Name:mk940869f3ea2950f9d7f9f25946c8b774cc6054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 03:33:53.374575    1463 out.go:177] * Starting control plane node addons-211000 in cluster addons-211000
	I0911 03:33:53.378607    1463 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 03:33:53.378632    1463 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 03:33:53.378645    1463 cache.go:57] Caching tarball of preloaded images
	I0911 03:33:53.378709    1463 preload.go:174] Found /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 03:33:53.378715    1463 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 03:33:53.378884    1463 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/addons-211000/config.json ...
	I0911 03:33:53.378895    1463 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/addons-211000/config.json: {Name:mke9a81e12081a57a17b2e57397f1c1cdd1b2abf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:33:53.379118    1463 start.go:365] acquiring machines lock for addons-211000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 03:33:53.379219    1463 start.go:369] acquired machines lock for "addons-211000" in 95.791µs
	I0911 03:33:53.379229    1463 start.go:93] Provisioning new machine with config: &{Name:addons-211000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-211000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 03:33:53.379262    1463 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 03:33:53.387616    1463 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0911 03:33:53.687711    1463 start.go:159] libmachine.API.Create for "addons-211000" (driver="qemu2")
	I0911 03:33:53.687757    1463 client.go:168] LocalClient.Create starting
	I0911 03:33:53.687929    1463 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 03:33:53.737149    1463 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 03:33:53.797439    1463 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 03:33:54.270395    1463 main.go:141] libmachine: Creating SSH key...
	I0911 03:33:54.369506    1463 main.go:141] libmachine: Creating Disk image...
	I0911 03:33:54.369512    1463 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 03:33:54.369699    1463 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/addons-211000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/addons-211000/disk.qcow2
	I0911 03:33:54.414691    1463 main.go:141] libmachine: STDOUT: 
	I0911 03:33:54.414710    1463 main.go:141] libmachine: STDERR: 
	I0911 03:33:54.414784    1463 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/addons-211000/disk.qcow2 +20000M
	I0911 03:33:54.422185    1463 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 03:33:54.422197    1463 main.go:141] libmachine: STDERR: 
	I0911 03:33:54.422214    1463 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/addons-211000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/addons-211000/disk.qcow2
	I0911 03:33:54.422221    1463 main.go:141] libmachine: Starting QEMU VM...
	I0911 03:33:54.422263    1463 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/addons-211000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/addons-211000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/addons-211000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:70:fc:4b:8b:fb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/addons-211000/disk.qcow2
	I0911 03:33:54.490302    1463 main.go:141] libmachine: STDOUT: 
	I0911 03:33:54.490327    1463 main.go:141] libmachine: STDERR: 
	I0911 03:33:54.490332    1463 main.go:141] libmachine: Attempt 0
	I0911 03:33:54.490344    1463 main.go:141] libmachine: Searching for f2:70:fc:4b:8b:fb in /var/db/dhcpd_leases ...
	I0911 03:33:56.491502    1463 main.go:141] libmachine: Attempt 1
	I0911 03:33:56.491607    1463 main.go:141] libmachine: Searching for f2:70:fc:4b:8b:fb in /var/db/dhcpd_leases ...
	I0911 03:33:58.492784    1463 main.go:141] libmachine: Attempt 2
	I0911 03:33:58.492806    1463 main.go:141] libmachine: Searching for f2:70:fc:4b:8b:fb in /var/db/dhcpd_leases ...
	I0911 03:34:00.494033    1463 main.go:141] libmachine: Attempt 3
	I0911 03:34:00.494073    1463 main.go:141] libmachine: Searching for f2:70:fc:4b:8b:fb in /var/db/dhcpd_leases ...
	I0911 03:34:02.495138    1463 main.go:141] libmachine: Attempt 4
	I0911 03:34:02.495158    1463 main.go:141] libmachine: Searching for f2:70:fc:4b:8b:fb in /var/db/dhcpd_leases ...
	I0911 03:34:04.496241    1463 main.go:141] libmachine: Attempt 5
	I0911 03:34:04.496270    1463 main.go:141] libmachine: Searching for f2:70:fc:4b:8b:fb in /var/db/dhcpd_leases ...
	I0911 03:34:06.497333    1463 main.go:141] libmachine: Attempt 6
	I0911 03:34:06.497364    1463 main.go:141] libmachine: Searching for f2:70:fc:4b:8b:fb in /var/db/dhcpd_leases ...
	I0911 03:34:06.497491    1463 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0911 03:34:06.497527    1463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:f2:70:fc:4b:8b:fb ID:1,f2:70:fc:4b:8b:fb Lease:0x65003e9d}
	I0911 03:34:06.497534    1463 main.go:141] libmachine: Found match: f2:70:fc:4b:8b:fb
	I0911 03:34:06.497560    1463 main.go:141] libmachine: IP: 192.168.105.2
	I0911 03:34:06.497569    1463 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
	I0911 03:34:08.517808    1463 machine.go:88] provisioning docker machine ...
	I0911 03:34:08.517881    1463 buildroot.go:166] provisioning hostname "addons-211000"
	I0911 03:34:08.518691    1463 main.go:141] libmachine: Using SSH client type: native
	I0911 03:34:08.519457    1463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10085e3b0] 0x100860e10 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0911 03:34:08.519481    1463 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-211000 && echo "addons-211000" | sudo tee /etc/hostname
	I0911 03:34:08.600486    1463 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-211000
	
	I0911 03:34:08.600592    1463 main.go:141] libmachine: Using SSH client type: native
	I0911 03:34:08.601026    1463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10085e3b0] 0x100860e10 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0911 03:34:08.601041    1463 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-211000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-211000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-211000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 03:34:08.663168    1463 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 03:34:08.663192    1463 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17225-951/.minikube CaCertPath:/Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17225-951/.minikube}
	I0911 03:34:08.663216    1463 buildroot.go:174] setting up certificates
	I0911 03:34:08.663222    1463 provision.go:83] configureAuth start
	I0911 03:34:08.663228    1463 provision.go:138] copyHostCerts
	I0911 03:34:08.663375    1463 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17225-951/.minikube/ca.pem (1078 bytes)
	I0911 03:34:08.663636    1463 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17225-951/.minikube/cert.pem (1123 bytes)
	I0911 03:34:08.663780    1463 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17225-951/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17225-951/.minikube/key.pem (1675 bytes)
	I0911 03:34:08.663882    1463 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17225-951/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca-key.pem org=jenkins.addons-211000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-211000]
	I0911 03:34:08.709724    1463 provision.go:172] copyRemoteCerts
	I0911 03:34:08.709786    1463 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 03:34:08.709798    1463 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/addons-211000/id_rsa Username:docker}
	I0911 03:34:08.737212    1463 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0911 03:34:08.743782    1463 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0911 03:34:08.750801    1463 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0911 03:34:08.758019    1463 provision.go:86] duration metric: configureAuth took 94.78975ms
	I0911 03:34:08.758026    1463 buildroot.go:189] setting minikube options for container-runtime
	I0911 03:34:08.758126    1463 config.go:182] Loaded profile config "addons-211000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 03:34:08.758159    1463 main.go:141] libmachine: Using SSH client type: native
	I0911 03:34:08.758376    1463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10085e3b0] 0x100860e10 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0911 03:34:08.758380    1463 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0911 03:34:08.808743    1463 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0911 03:34:08.808766    1463 buildroot.go:70] root file system type: tmpfs
	I0911 03:34:08.808828    1463 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0911 03:34:08.808872    1463 main.go:141] libmachine: Using SSH client type: native
	I0911 03:34:08.809097    1463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10085e3b0] 0x100860e10 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0911 03:34:08.809130    1463 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0911 03:34:08.862623    1463 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0911 03:34:08.862671    1463 main.go:141] libmachine: Using SSH client type: native
	I0911 03:34:08.862906    1463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10085e3b0] 0x100860e10 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0911 03:34:08.862914    1463 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0911 03:34:09.198404    1463 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0911 03:34:09.198418    1463 machine.go:91] provisioned docker machine in 680.586708ms
	I0911 03:34:09.198423    1463 client.go:171] LocalClient.Create took 15.510912s
	I0911 03:34:09.198439    1463 start.go:167] duration metric: libmachine.API.Create for "addons-211000" took 15.51098925s
	I0911 03:34:09.198445    1463 start.go:300] post-start starting for "addons-211000" (driver="qemu2")
	I0911 03:34:09.198450    1463 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 03:34:09.198520    1463 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 03:34:09.198530    1463 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/addons-211000/id_rsa Username:docker}
	I0911 03:34:09.225861    1463 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 03:34:09.227203    1463 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 03:34:09.227215    1463 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17225-951/.minikube/addons for local assets ...
	I0911 03:34:09.227284    1463 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17225-951/.minikube/files for local assets ...
	I0911 03:34:09.227309    1463 start.go:303] post-start completed in 28.861416ms
	I0911 03:34:09.227647    1463 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/addons-211000/config.json ...
	I0911 03:34:09.227783    1463 start.go:128] duration metric: createHost completed in 15.848774291s
	I0911 03:34:09.227809    1463 main.go:141] libmachine: Using SSH client type: native
	I0911 03:34:09.228020    1463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10085e3b0] 0x100860e10 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0911 03:34:09.228025    1463 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0911 03:34:09.278911    1463 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694428449.385296293
	
	I0911 03:34:09.278918    1463 fix.go:206] guest clock: 1694428449.385296293
	I0911 03:34:09.278922    1463 fix.go:219] Guest: 2023-09-11 03:34:09.385296293 -0700 PDT Remote: 2023-09-11 03:34:09.227788 -0700 PDT m=+15.949954210 (delta=157.508293ms)
	I0911 03:34:09.278937    1463 fix.go:190] guest clock delta is within tolerance: 157.508293ms
	I0911 03:34:09.278940    1463 start.go:83] releasing machines lock for "addons-211000", held for 15.899972708s
	I0911 03:34:09.279226    1463 ssh_runner.go:195] Run: cat /version.json
	I0911 03:34:09.279236    1463 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/addons-211000/id_rsa Username:docker}
	I0911 03:34:09.279241    1463 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 03:34:09.279287    1463 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/addons-211000/id_rsa Username:docker}
	I0911 03:34:09.347676    1463 ssh_runner.go:195] Run: systemctl --version
	I0911 03:34:09.349786    1463 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0911 03:34:09.351666    1463 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 03:34:09.351695    1463 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 03:34:09.356862    1463 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0911 03:34:09.356868    1463 start.go:466] detecting cgroup driver to use...
	I0911 03:34:09.356982    1463 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 03:34:09.364214    1463 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0911 03:34:09.368910    1463 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0911 03:34:09.372171    1463 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0911 03:34:09.372219    1463 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0911 03:34:09.375348    1463 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0911 03:34:09.380273    1463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0911 03:34:09.384127    1463 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0911 03:34:09.387247    1463 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 03:34:09.390902    1463 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0911 03:34:09.395246    1463 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 03:34:09.398582    1463 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 03:34:09.401650    1463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 03:34:09.478645    1463 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0911 03:34:09.487159    1463 start.go:466] detecting cgroup driver to use...
	I0911 03:34:09.487237    1463 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0911 03:34:09.493246    1463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 03:34:09.498194    1463 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 03:34:09.505733    1463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 03:34:09.509958    1463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0911 03:34:09.514858    1463 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0911 03:34:09.557930    1463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0911 03:34:09.563273    1463 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 03:34:09.568721    1463 ssh_runner.go:195] Run: which cri-dockerd
	I0911 03:34:09.569997    1463 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0911 03:34:09.572671    1463 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0911 03:34:09.577631    1463 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0911 03:34:09.641222    1463 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0911 03:34:09.700675    1463 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0911 03:34:09.700688    1463 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0911 03:34:09.706065    1463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 03:34:09.760965    1463 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0911 03:34:10.915098    1463 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.154135625s)
	I0911 03:34:10.915153    1463 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0911 03:34:10.976892    1463 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0911 03:34:11.037168    1463 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0911 03:34:11.101561    1463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 03:34:11.161991    1463 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0911 03:34:11.168954    1463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 03:34:11.230023    1463 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0911 03:34:11.253319    1463 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0911 03:34:11.253418    1463 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0911 03:34:11.255320    1463 start.go:534] Will wait 60s for crictl version
	I0911 03:34:11.255354    1463 ssh_runner.go:195] Run: which crictl
	I0911 03:34:11.257929    1463 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 03:34:11.272786    1463 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.5
	RuntimeApiVersion:  v1alpha2
	I0911 03:34:11.272856    1463 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0911 03:34:11.282845    1463 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0911 03:34:11.301262    1463 out.go:204] * Preparing Kubernetes v1.28.1 on Docker 24.0.5 ...
	I0911 03:34:11.301342    1463 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0911 03:34:11.302872    1463 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 03:34:11.306889    1463 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 03:34:11.306937    1463 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0911 03:34:11.312045    1463 docker.go:636] Got preloaded images: 
	I0911 03:34:11.312054    1463 docker.go:642] registry.k8s.io/kube-apiserver:v1.28.1 wasn't preloaded
	I0911 03:34:11.312094    1463 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0911 03:34:11.315074    1463 ssh_runner.go:195] Run: which lz4
	I0911 03:34:11.316581    1463 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0911 03:34:11.317951    1463 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0911 03:34:11.317966    1463 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356902558 bytes)
	I0911 03:34:12.658696    1463 docker.go:600] Took 1.342181 seconds to copy over tarball
	I0911 03:34:12.658769    1463 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0911 03:34:13.695579    1463 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.036812583s)
	I0911 03:34:13.695594    1463 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0911 03:34:13.711310    1463 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0911 03:34:13.714759    1463 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0911 03:34:13.719820    1463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 03:34:13.780659    1463 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0911 03:34:16.082609    1463 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.30197175s)
	I0911 03:34:16.082703    1463 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0911 03:34:16.088687    1463 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0911 03:34:16.088698    1463 cache_images.go:84] Images are preloaded, skipping loading
	I0911 03:34:16.088767    1463 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0911 03:34:16.099147    1463 cni.go:84] Creating CNI manager for ""
	I0911 03:34:16.099157    1463 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 03:34:16.099178    1463 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 03:34:16.099197    1463 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-211000 NodeName:addons-211000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 03:34:16.099267    1463 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-211000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 03:34:16.099312    1463 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-211000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:addons-211000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0911 03:34:16.099357    1463 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 03:34:16.102746    1463 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 03:34:16.102780    1463 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 03:34:16.105735    1463 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0911 03:34:16.110553    1463 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 03:34:16.115481    1463 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0911 03:34:16.120570    1463 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0911 03:34:16.121823    1463 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 03:34:16.125846    1463 certs.go:56] Setting up /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/addons-211000 for IP: 192.168.105.2
	I0911 03:34:16.125857    1463 certs.go:190] acquiring lock for shared ca certs: {Name:mkb829580b94fbef660a72f5d00b6f296afd6da6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:34:16.126007    1463 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/17225-951/.minikube/ca.key
	I0911 03:34:16.165254    1463 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17225-951/.minikube/ca.crt ...
	I0911 03:34:16.165258    1463 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/ca.crt: {Name:mka30e41b5eca8d0680636c4f609e7317a88c8a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:34:16.165453    1463 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17225-951/.minikube/ca.key ...
	I0911 03:34:16.165456    1463 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/ca.key: {Name:mkc618cb1e59702dad5e000c367e8302ad1b5278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:34:16.165570    1463 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/17225-951/.minikube/proxy-client-ca.key
	I0911 03:34:16.374645    1463 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17225-951/.minikube/proxy-client-ca.crt ...
	I0911 03:34:16.374649    1463 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/proxy-client-ca.crt: {Name:mkef2fe60255080ff6c90ef82921d9569436b486 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:34:16.374828    1463 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17225-951/.minikube/proxy-client-ca.key ...
	I0911 03:34:16.374831    1463 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/proxy-client-ca.key: {Name:mkc38b60810939efcf1aa416bb3748317e088bd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:34:16.374960    1463 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/addons-211000/client.key
	I0911 03:34:16.374966    1463 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/addons-211000/client.crt with IP's: []
	I0911 03:34:16.529105    1463 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/addons-211000/client.crt ...
	I0911 03:34:16.529117    1463 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/addons-211000/client.crt: {Name:mk37de878910e10d20249783469b5920af3c8844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:34:16.529380    1463 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/addons-211000/client.key ...
	I0911 03:34:16.529383    1463 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/addons-211000/client.key: {Name:mk902a8ec4845f3119ebed1a06d6e11a69b88d33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:34:16.529490    1463 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/addons-211000/apiserver.key.96055969
	I0911 03:34:16.529500    1463 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/addons-211000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0911 03:34:16.696573    1463 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/addons-211000/apiserver.crt.96055969 ...
	I0911 03:34:16.696589    1463 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/addons-211000/apiserver.crt.96055969: {Name:mkc4c820f1db704bdff09ab1fae770979176b51f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:34:16.696801    1463 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/addons-211000/apiserver.key.96055969 ...
	I0911 03:34:16.696804    1463 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/addons-211000/apiserver.key.96055969: {Name:mk2255574e5ceb198a4b049489c42b2970e33ffd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:34:16.696922    1463 certs.go:337] copying /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/addons-211000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/addons-211000/apiserver.crt
	I0911 03:34:16.697016    1463 certs.go:341] copying /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/addons-211000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/addons-211000/apiserver.key
	I0911 03:34:16.697121    1463 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/addons-211000/proxy-client.key
	I0911 03:34:16.697149    1463 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/addons-211000/proxy-client.crt with IP's: []
	I0911 03:34:16.769353    1463 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/addons-211000/proxy-client.crt ...
	I0911 03:34:16.769357    1463 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/addons-211000/proxy-client.crt: {Name:mkb6054226bc3ede1b9db5bd678d73e8e53d09ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:34:16.769493    1463 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/addons-211000/proxy-client.key ...
	I0911 03:34:16.769496    1463 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/addons-211000/proxy-client.key: {Name:mk49e5e5eda2599180c0326816f0345c7b1517a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:34:16.769736    1463 certs.go:437] found cert: /Users/jenkins/minikube-integration/17225-951/.minikube/certs/Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 03:34:16.769764    1463 certs.go:437] found cert: /Users/jenkins/minikube-integration/17225-951/.minikube/certs/Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem (1078 bytes)
	I0911 03:34:16.769785    1463 certs.go:437] found cert: /Users/jenkins/minikube-integration/17225-951/.minikube/certs/Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem (1123 bytes)
	I0911 03:34:16.769805    1463 certs.go:437] found cert: /Users/jenkins/minikube-integration/17225-951/.minikube/certs/Users/jenkins/minikube-integration/17225-951/.minikube/certs/key.pem (1675 bytes)
	I0911 03:34:16.770163    1463 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/addons-211000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 03:34:16.778565    1463 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/addons-211000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0911 03:34:16.786041    1463 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/addons-211000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 03:34:16.793572    1463 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/addons-211000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0911 03:34:16.800601    1463 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 03:34:16.807142    1463 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 03:34:16.814444    1463 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 03:34:16.821644    1463 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0911 03:34:16.828676    1463 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 03:34:16.835505    1463 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 03:34:16.841357    1463 ssh_runner.go:195] Run: openssl version
	I0911 03:34:16.843373    1463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 03:34:16.846897    1463 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 03:34:16.848696    1463 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0911 03:34:16.848715    1463 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 03:34:16.850563    1463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0911 03:34:16.854051    1463 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 03:34:16.855609    1463 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0911 03:34:16.855647    1463 kubeadm.go:404] StartCluster: {Name:addons-211000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-211000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 03:34:16.855707    1463 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0911 03:34:16.861444    1463 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 03:34:16.864723    1463 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 03:34:16.867417    1463 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 03:34:16.870708    1463 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 03:34:16.870723    1463 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0911 03:34:16.892803    1463 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0911 03:34:16.892833    1463 kubeadm.go:322] [preflight] Running pre-flight checks
	I0911 03:34:16.947551    1463 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0911 03:34:16.947601    1463 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0911 03:34:16.947645    1463 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0911 03:34:17.004082    1463 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0911 03:34:17.009266    1463 out.go:204]   - Generating certificates and keys ...
	I0911 03:34:17.012051    1463 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0911 03:34:17.012080    1463 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0911 03:34:17.106070    1463 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0911 03:34:17.198172    1463 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0911 03:34:17.307080    1463 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0911 03:34:17.491226    1463 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0911 03:34:17.570158    1463 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0911 03:34:17.570212    1463 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-211000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0911 03:34:17.639750    1463 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0911 03:34:17.639831    1463 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-211000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0911 03:34:17.768950    1463 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0911 03:34:17.836350    1463 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0911 03:34:17.935401    1463 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0911 03:34:17.935439    1463 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0911 03:34:17.995378    1463 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0911 03:34:18.195784    1463 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0911 03:34:18.235341    1463 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0911 03:34:18.395239    1463 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0911 03:34:18.395416    1463 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0911 03:34:18.397330    1463 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0911 03:34:18.400677    1463 out.go:204]   - Booting up control plane ...
	I0911 03:34:18.400754    1463 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0911 03:34:18.400809    1463 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0911 03:34:18.400854    1463 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0911 03:34:18.405067    1463 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0911 03:34:18.405122    1463 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0911 03:34:18.405141    1463 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0911 03:34:18.476047    1463 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0911 03:34:21.980453    1463 kubeadm.go:322] [apiclient] All control plane components are healthy after 3.504497 seconds
	I0911 03:34:21.980508    1463 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0911 03:34:21.987470    1463 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0911 03:34:22.497949    1463 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0911 03:34:22.498067    1463 kubeadm.go:322] [mark-control-plane] Marking the node addons-211000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0911 03:34:23.005671    1463 kubeadm.go:322] [bootstrap-token] Using token: sqhsy3.6a2jynnmtcs9we22
	I0911 03:34:23.015075    1463 out.go:204]   - Configuring RBAC rules ...
	I0911 03:34:23.015141    1463 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0911 03:34:23.015213    1463 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0911 03:34:23.017560    1463 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0911 03:34:23.019003    1463 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0911 03:34:23.020062    1463 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0911 03:34:23.021370    1463 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0911 03:34:23.025633    1463 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0911 03:34:23.180698    1463 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0911 03:34:23.411932    1463 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0911 03:34:23.412281    1463 kubeadm.go:322] 
	I0911 03:34:23.412309    1463 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0911 03:34:23.412315    1463 kubeadm.go:322] 
	I0911 03:34:23.412356    1463 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0911 03:34:23.412362    1463 kubeadm.go:322] 
	I0911 03:34:23.412376    1463 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0911 03:34:23.412404    1463 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0911 03:34:23.412434    1463 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0911 03:34:23.412439    1463 kubeadm.go:322] 
	I0911 03:34:23.412464    1463 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0911 03:34:23.412466    1463 kubeadm.go:322] 
	I0911 03:34:23.412491    1463 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0911 03:34:23.412497    1463 kubeadm.go:322] 
	I0911 03:34:23.412525    1463 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0911 03:34:23.412559    1463 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0911 03:34:23.412594    1463 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0911 03:34:23.412597    1463 kubeadm.go:322] 
	I0911 03:34:23.412671    1463 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0911 03:34:23.412734    1463 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0911 03:34:23.412737    1463 kubeadm.go:322] 
	I0911 03:34:23.412776    1463 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token sqhsy3.6a2jynnmtcs9we22 \
	I0911 03:34:23.412839    1463 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fefaa3365accf94cefbb66337f5f2e8a6ced437eccd2cdfbf367c2be71bce2cf \
	I0911 03:34:23.412850    1463 kubeadm.go:322] 	--control-plane 
	I0911 03:34:23.412853    1463 kubeadm.go:322] 
	I0911 03:34:23.412890    1463 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0911 03:34:23.412894    1463 kubeadm.go:322] 
	I0911 03:34:23.412930    1463 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token sqhsy3.6a2jynnmtcs9we22 \
	I0911 03:34:23.412977    1463 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fefaa3365accf94cefbb66337f5f2e8a6ced437eccd2cdfbf367c2be71bce2cf 
	I0911 03:34:23.413041    1463 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0911 03:34:23.413051    1463 cni.go:84] Creating CNI manager for ""
	I0911 03:34:23.413060    1463 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 03:34:23.420209    1463 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 03:34:23.423189    1463 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 03:34:23.426300    1463 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0911 03:34:23.431046    1463 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0911 03:34:23.431105    1463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:34:23.431111    1463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=c0ed13cc972769b226a536a2831a80a40376f282 minikube.k8s.io/name=addons-211000 minikube.k8s.io/updated_at=2023_09_11T03_34_23_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:34:23.501014    1463 ops.go:34] apiserver oom_adj: -16
	I0911 03:34:23.501055    1463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:34:23.533067    1463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:34:24.069662    1463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:34:24.569620    1463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:34:25.069639    1463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:34:25.569579    1463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:34:26.068788    1463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:34:26.569612    1463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:34:27.069588    1463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:34:27.569570    1463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:34:28.069575    1463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:34:28.569570    1463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:34:29.069528    1463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:34:29.569550    1463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:34:30.069535    1463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:34:30.569527    1463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:34:31.069552    1463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:34:31.569548    1463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:34:32.069552    1463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:34:32.569534    1463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:34:33.069458    1463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:34:33.568677    1463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:34:34.069500    1463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:34:34.569482    1463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:34:35.069447    1463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:34:35.569447    1463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:34:36.069418    1463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:34:36.107712    1463 kubeadm.go:1081] duration metric: took 12.676843959s to wait for elevateKubeSystemPrivileges.
	I0911 03:34:36.107727    1463 kubeadm.go:406] StartCluster complete in 19.252392292s
	I0911 03:34:36.107736    1463 settings.go:142] acquiring lock: {Name:mkc25efdeb235bb06c8f15f7bc2dab1fff3cf449 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:34:36.107893    1463 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 03:34:36.108061    1463 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/kubeconfig: {Name:mk9102949afcf8989652bad8d36d55e289cc75c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:34:36.108264    1463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0911 03:34:36.108313    1463 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0911 03:34:36.108382    1463 addons.go:69] Setting volumesnapshots=true in profile "addons-211000"
	I0911 03:34:36.108389    1463 addons.go:231] Setting addon volumesnapshots=true in "addons-211000"
	I0911 03:34:36.108390    1463 addons.go:69] Setting default-storageclass=true in profile "addons-211000"
	I0911 03:34:36.108396    1463 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-211000"
	I0911 03:34:36.108419    1463 config.go:182] Loaded profile config "addons-211000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 03:34:36.108425    1463 host.go:66] Checking if "addons-211000" exists ...
	I0911 03:34:36.108447    1463 addons.go:69] Setting ingress-dns=true in profile "addons-211000"
	I0911 03:34:36.108451    1463 addons.go:231] Setting addon ingress-dns=true in "addons-211000"
	I0911 03:34:36.108468    1463 host.go:66] Checking if "addons-211000" exists ...
	I0911 03:34:36.108454    1463 addons.go:69] Setting metrics-server=true in profile "addons-211000"
	I0911 03:34:36.108472    1463 addons.go:69] Setting gcp-auth=true in profile "addons-211000"
	I0911 03:34:36.108496    1463 addons.go:231] Setting addon metrics-server=true in "addons-211000"
	I0911 03:34:36.108433    1463 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-211000"
	I0911 03:34:36.108532    1463 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-211000"
	I0911 03:34:36.108545    1463 host.go:66] Checking if "addons-211000" exists ...
	I0911 03:34:36.108531    1463 addons.go:69] Setting inspektor-gadget=true in profile "addons-211000"
	I0911 03:34:36.108549    1463 mustload.go:65] Loading cluster: addons-211000
	I0911 03:34:36.108570    1463 addons.go:69] Setting registry=true in profile "addons-211000"
	I0911 03:34:36.108580    1463 addons.go:231] Setting addon registry=true in "addons-211000"
	I0911 03:34:36.108606    1463 host.go:66] Checking if "addons-211000" exists ...
	I0911 03:34:36.108610    1463 addons.go:69] Setting cloud-spanner=true in profile "addons-211000"
	I0911 03:34:36.108615    1463 addons.go:231] Setting addon cloud-spanner=true in "addons-211000"
	I0911 03:34:36.108622    1463 host.go:66] Checking if "addons-211000" exists ...
	I0911 03:34:36.108631    1463 host.go:66] Checking if "addons-211000" exists ...
	W0911 03:34:36.108692    1463 host.go:54] host status for "addons-211000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17225-951/.minikube/machines/addons-211000/monitor: connect: connection refused
	W0911 03:34:36.108700    1463 addons.go:277] "addons-211000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	I0911 03:34:36.108382    1463 addons.go:69] Setting ingress=true in profile "addons-211000"
	I0911 03:34:36.108784    1463 addons.go:231] Setting addon ingress=true in "addons-211000"
	I0911 03:34:36.108795    1463 host.go:66] Checking if "addons-211000" exists ...
	W0911 03:34:36.108851    1463 host.go:54] host status for "addons-211000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17225-951/.minikube/machines/addons-211000/monitor: connect: connection refused
	W0911 03:34:36.108857    1463 addons.go:277] "addons-211000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	W0911 03:34:36.108891    1463 host.go:54] host status for "addons-211000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17225-951/.minikube/machines/addons-211000/monitor: connect: connection refused
	W0911 03:34:36.108898    1463 addons.go:277] "addons-211000" is not running, setting volumesnapshots=true and skipping enablement (err=<nil>)
	I0911 03:34:36.108558    1463 addons.go:231] Setting addon inspektor-gadget=true in "addons-211000"
	I0911 03:34:36.108915    1463 config.go:182] Loaded profile config "addons-211000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 03:34:36.108933    1463 host.go:66] Checking if "addons-211000" exists ...
	W0911 03:34:36.108992    1463 host.go:54] host status for "addons-211000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17225-951/.minikube/machines/addons-211000/monitor: connect: connection refused
	W0911 03:34:36.109000    1463 addons.go:277] "addons-211000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I0911 03:34:36.109003    1463 addons.go:467] Verifying addon ingress=true in "addons-211000"
	I0911 03:34:36.108607    1463 addons.go:69] Setting storage-provisioner=true in profile "addons-211000"
	I0911 03:34:36.113449    1463 out.go:177] * Verifying ingress addon...
	I0911 03:34:36.109026    1463 addons.go:231] Setting addon storage-provisioner=true in "addons-211000"
	W0911 03:34:36.109087    1463 host.go:54] host status for "addons-211000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17225-951/.minikube/machines/addons-211000/monitor: connect: connection refused
	W0911 03:34:36.109194    1463 host.go:54] host status for "addons-211000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17225-951/.minikube/machines/addons-211000/monitor: connect: connection refused
	W0911 03:34:36.109342    1463 host.go:54] host status for "addons-211000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17225-951/.minikube/machines/addons-211000/monitor: connect: connection refused
	I0911 03:34:36.118161    1463 addons.go:231] Setting addon default-storageclass=true in "addons-211000"
	W0911 03:34:36.123400    1463 addons.go:277] "addons-211000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I0911 03:34:36.123443    1463 host.go:66] Checking if "addons-211000" exists ...
	I0911 03:34:36.123812    1463 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W0911 03:34:36.127381    1463 addons.go:277] "addons-211000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	I0911 03:34:36.127388    1463 out.go:177] 
	W0911 03:34:36.127394    1463 addons.go:277] "addons-211000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	I0911 03:34:36.127512    1463 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-211000" context rescaled to 1 replicas
	I0911 03:34:36.131391    1463 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 03:34:36.134360    1463 out.go:177] * Verifying Kubernetes components...
	I0911 03:34:36.131442    1463 addons.go:467] Verifying addon metrics-server=true in "addons-211000"
	I0911 03:34:36.131493    1463 host.go:66] Checking if "addons-211000" exists ...
	I0911 03:34:36.131496    1463 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0911 03:34:36.131503    1463 addons.go:467] Verifying addon registry=true in "addons-211000"
	I0911 03:34:36.133796    1463 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0911 03:34:36.134980    1463 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	W0911 03:34:36.142466    1463 out.go:239] X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17225-951/.minikube/machines/addons-211000/monitor: connect: connection refused
	I0911 03:34:36.146421    1463 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 03:34:36.154401    1463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 03:34:36.157386    1463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17225-951/.minikube/machines/addons-211000/monitor: connect: connection refused
	I0911 03:34:36.157549    1463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0911 03:34:36.161370    1463 out.go:177] * Verifying registry addon...
	W0911 03:34:36.161378    1463 out.go:239] * 
	I0911 03:34:36.161472    1463 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/addons-211000/id_rsa Username:docker}
	I0911 03:34:36.167379    1463 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	* 
	I0911 03:34:36.167509    1463 node_ready.go:35] waiting up to 6m0s for node "addons-211000" to be "Ready" ...
	I0911 03:34:36.179552    1463 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	W0911 03:34:36.179969    1463 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 03:34:36.181003    1463 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0911 03:34:36.182093    1463 node_ready.go:49] node "addons-211000" has status "Ready":"True"
	I0911 03:34:36.188397    1463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0911 03:34:36.188405    1463 node_ready.go:38] duration metric: took 7.756583ms waiting for node "addons-211000" to be "Ready" ...
	I0911 03:34:36.188406    1463 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/addons-211000/id_rsa Username:docker}
	I0911 03:34:36.188410    1463 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 03:34:36.192348    1463 out.go:177] 
	I0911 03:34:36.190537    1463 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0911 03:34:36.191580    1463 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-211000" in "kube-system" namespace to be "Ready" ...

                                                
                                                
** /stderr **
addons_test.go:90: out/minikube-darwin-arm64 start -p addons-211000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (42.93s)
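
Note on the failure mode: the kubeadm bootstrap in the log above completes successfully; the run only fails once minikube turns to addon enablement and can no longer dial the VM's QMP monitor socket ("state: connect: dial unix .../monitor: connect: connection refused"), which is why each addon is reported as "not running, setting ... and skipping enablement". A minimal host-side triage sketch, assuming the workspace path shown in the log (the nc/pgrep probes are illustrative, not part of the test suite):

	SOCK="/Users/jenkins/minikube-integration/17225-951/.minikube/machines/addons-211000/monitor"
	ls -l "$SOCK"                   # QMP socket minikube dials; should exist with file type 's'
	if nc -U "$SOCK" </dev/null; then echo "monitor socket accepting"; else echo "refused: QEMU process likely exited"; fi
	pgrep -fl qemu-system-aarch64   # is the VM process still alive?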
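For reference, the long sed pipeline in the log above (kubectl get configmap coredns ... | sed ... | kubectl replace -f -) is minikube injecting a hosts entry for host.minikube.internal into the CoreDNS Corefile. A sketch of the fragment it produces, plus one way to inspect it on a reachable cluster (illustrative, not part of the test):

	# Corefile fragment added by the rewrite (192.168.105.1 is the host gateway in this run):
	#     hosts {
	#        192.168.105.1 host.minikube.internal
	#        fallthrough
	#     }
	kubectl -n kube-system get configmap coredns -o yaml   # view the live Corefile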

                                                
                                    
TestCertOptions (9.92s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-952000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-952000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.644912459s)

                                                
                                                
-- stdout --
	* [cert-options-952000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-options-952000 in cluster cert-options-952000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-952000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-952000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-952000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-952000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-952000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 89 (80.25325ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-952000"

                                                
                                                
-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-952000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 89
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-952000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-952000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-952000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 89 (39.680291ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-952000"

                                                
                                                
-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-952000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 89
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-952000"

                                                
                                                
-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2023-09-11 04:33:42.639592 -0700 PDT m=+3615.186984251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-952000 -n cert-options-952000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-952000 -n cert-options-952000: exit status 7 (28.552125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-952000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-952000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-952000
--- FAIL: TestCertOptions (9.92s)
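
Root cause shared by this test and the other qemu2 failures in this report: the qemu2 driver launches QEMU through socket_vmnet_client, and the client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so every create/restart dies with "Connection refused" before a VM ever exists. A minimal host-side check, assuming socket_vmnet is installed under /opt/socket_vmnet as the logged client path suggests (the launch command follows the socket_vmnet README; exact flags may differ per install):

	ls -l /var/run/socket_vmnet    # daemon socket that socket_vmnet_client dials
	pgrep -fl socket_vmnet         # is the daemon process running at all?
	# if absent, start it manually (vmnet requires root):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet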
E0911 04:34:32.394646    1393 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/client.crt: no such file or directory
E0911 04:35:55.464822    1393 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/client.crt: no such file or directory

                                                
                                    
TestCertExpiration (195.2s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-813000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-813000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.800733625s)

                                                
                                                
-- stdout --
	* [cert-expiration-813000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-expiration-813000 in cluster cert-expiration-813000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-813000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-813000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-813000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-813000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-813000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.226374792s)

                                                
                                                
-- stdout --
	* [cert-expiration-813000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-813000 in cluster cert-expiration-813000
	* Restarting existing qemu2 VM for "cert-expiration-813000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-813000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-813000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-813000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-813000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-813000 in cluster cert-expiration-813000
	* Restarting existing qemu2 VM for "cert-expiration-813000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-813000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-813000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2023-09-11 04:36:42.849427 -0700 PDT m=+3795.396995209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-813000 -n cert-expiration-813000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-813000 -n cert-expiration-813000: exit status 7 (67.67375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-813000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-813000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-813000
--- FAIL: TestCertExpiration (195.20s)

                                                
                                    
TestDockerFlags (10.06s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-949000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-949000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.816515834s)

                                                
                                                
-- stdout --
	* [docker-flags-949000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node docker-flags-949000 in cluster docker-flags-949000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-949000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0911 04:33:22.804123    3572 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:33:22.804259    3572 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:33:22.804265    3572 out.go:309] Setting ErrFile to fd 2...
	I0911 04:33:22.804267    3572 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:33:22.804384    3572 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:33:22.805384    3572 out.go:303] Setting JSON to false
	I0911 04:33:22.820550    3572 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3776,"bootTime":1694428226,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0911 04:33:22.820628    3572 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:33:22.824147    3572 out.go:177] * [docker-flags-949000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:33:22.831913    3572 out.go:177]   - MINIKUBE_LOCATION=17225
	I0911 04:33:22.836056    3572 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 04:33:22.831973    3572 notify.go:220] Checking for updates...
	I0911 04:33:22.840448    3572 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:33:22.843047    3572 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:33:22.846111    3572 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	I0911 04:33:22.849107    3572 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:33:22.852406    3572 config.go:182] Loaded profile config "force-systemd-flag-064000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:33:22.852449    3572 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:33:22.857070    3572 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:33:22.864070    3572 start.go:298] selected driver: qemu2
	I0911 04:33:22.864075    3572 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:33:22.864082    3572 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:33:22.866062    3572 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:33:22.869116    3572 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:33:22.872215    3572 start_flags.go:917] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0911 04:33:22.872241    3572 cni.go:84] Creating CNI manager for ""
	I0911 04:33:22.872249    3572 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 04:33:22.872253    3572 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0911 04:33:22.872261    3572 start_flags.go:321] config:
	{Name:docker-flags-949000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:docker-flags-949000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:33:22.876430    3572 iso.go:125] acquiring lock: {Name:mk940869f3ea2950f9d7f9f25946c8b774cc6054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:33:22.884042    3572 out.go:177] * Starting control plane node docker-flags-949000 in cluster docker-flags-949000
	I0911 04:33:22.887009    3572 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:33:22.887043    3572 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:33:22.887064    3572 cache.go:57] Caching tarball of preloaded images
	I0911 04:33:22.887142    3572 preload.go:174] Found /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:33:22.887147    3572 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:33:22.887219    3572 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/docker-flags-949000/config.json ...
	I0911 04:33:22.887233    3572 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/docker-flags-949000/config.json: {Name:mk84bc12cf456d1806997b16ea400f2d1c5033bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:33:22.887430    3572 start.go:365] acquiring machines lock for docker-flags-949000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:33:22.887459    3572 start.go:369] acquired machines lock for "docker-flags-949000" in 23.625µs
	I0911 04:33:22.887471    3572 start.go:93] Provisioning new machine with config: &{Name:docker-flags-949000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:docker-flags-949000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:33:22.887502    3572 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:33:22.892114    3572 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0911 04:33:22.908207    3572 start.go:159] libmachine.API.Create for "docker-flags-949000" (driver="qemu2")
	I0911 04:33:22.908234    3572 client.go:168] LocalClient.Create starting
	I0911 04:33:22.908288    3572 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:33:22.908318    3572 main.go:141] libmachine: Decoding PEM data...
	I0911 04:33:22.908330    3572 main.go:141] libmachine: Parsing certificate...
	I0911 04:33:22.908368    3572 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:33:22.908386    3572 main.go:141] libmachine: Decoding PEM data...
	I0911 04:33:22.908395    3572 main.go:141] libmachine: Parsing certificate...
	I0911 04:33:22.908709    3572 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:33:23.049176    3572 main.go:141] libmachine: Creating SSH key...
	I0911 04:33:23.182890    3572 main.go:141] libmachine: Creating Disk image...
	I0911 04:33:23.182901    3572 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:33:23.183075    3572 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/docker-flags-949000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/docker-flags-949000/disk.qcow2
	I0911 04:33:23.191704    3572 main.go:141] libmachine: STDOUT: 
	I0911 04:33:23.191728    3572 main.go:141] libmachine: STDERR: 
	I0911 04:33:23.191796    3572 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/docker-flags-949000/disk.qcow2 +20000M
	I0911 04:33:23.199018    3572 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:33:23.199035    3572 main.go:141] libmachine: STDERR: 
	I0911 04:33:23.199049    3572 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/docker-flags-949000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/docker-flags-949000/disk.qcow2
	I0911 04:33:23.199058    3572 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:33:23.199089    3572 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/docker-flags-949000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/docker-flags-949000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/docker-flags-949000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:fb:fa:13:43:45 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/docker-flags-949000/disk.qcow2
	I0911 04:33:23.200597    3572 main.go:141] libmachine: STDOUT: 
	I0911 04:33:23.200611    3572 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:33:23.200634    3572 client.go:171] LocalClient.Create took 292.392625ms
	I0911 04:33:25.202800    3572 start.go:128] duration metric: createHost completed in 2.315279s
	I0911 04:33:25.203105    3572 start.go:83] releasing machines lock for "docker-flags-949000", held for 2.315638167s
	W0911 04:33:25.203166    3572 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:33:25.211538    3572 out.go:177] * Deleting "docker-flags-949000" in qemu2 ...
	W0911 04:33:25.232434    3572 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:33:25.232466    3572 start.go:687] Will try again in 5 seconds ...
	I0911 04:33:30.234688    3572 start.go:365] acquiring machines lock for docker-flags-949000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:33:30.236932    3572 start.go:369] acquired machines lock for "docker-flags-949000" in 2.081875ms
	I0911 04:33:30.237101    3572 start.go:93] Provisioning new machine with config: &{Name:docker-flags-949000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:docker-flags-949000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:33:30.237384    3572 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:33:30.243114    3572 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0911 04:33:30.289381    3572 start.go:159] libmachine.API.Create for "docker-flags-949000" (driver="qemu2")
	I0911 04:33:30.289433    3572 client.go:168] LocalClient.Create starting
	I0911 04:33:30.289548    3572 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:33:30.289596    3572 main.go:141] libmachine: Decoding PEM data...
	I0911 04:33:30.289613    3572 main.go:141] libmachine: Parsing certificate...
	I0911 04:33:30.289696    3572 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:33:30.289735    3572 main.go:141] libmachine: Decoding PEM data...
	I0911 04:33:30.289746    3572 main.go:141] libmachine: Parsing certificate...
	I0911 04:33:30.290265    3572 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:33:30.421752    3572 main.go:141] libmachine: Creating SSH key...
	I0911 04:33:30.533031    3572 main.go:141] libmachine: Creating Disk image...
	I0911 04:33:30.533036    3572 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:33:30.533175    3572 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/docker-flags-949000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/docker-flags-949000/disk.qcow2
	I0911 04:33:30.541616    3572 main.go:141] libmachine: STDOUT: 
	I0911 04:33:30.541631    3572 main.go:141] libmachine: STDERR: 
	I0911 04:33:30.541685    3572 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/docker-flags-949000/disk.qcow2 +20000M
	I0911 04:33:30.548808    3572 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:33:30.548822    3572 main.go:141] libmachine: STDERR: 
	I0911 04:33:30.548832    3572 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/docker-flags-949000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/docker-flags-949000/disk.qcow2
	I0911 04:33:30.548838    3572 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:33:30.548892    3572 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/docker-flags-949000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/docker-flags-949000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/docker-flags-949000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:4e:1f:35:09:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/docker-flags-949000/disk.qcow2
	I0911 04:33:30.550473    3572 main.go:141] libmachine: STDOUT: 
	I0911 04:33:30.550485    3572 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:33:30.550496    3572 client.go:171] LocalClient.Create took 261.055625ms
	I0911 04:33:32.552657    3572 start.go:128] duration metric: createHost completed in 2.315253417s
	I0911 04:33:32.552720    3572 start.go:83] releasing machines lock for "docker-flags-949000", held for 2.315757375s
	W0911 04:33:32.553119    3572 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-949000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-949000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:33:32.564793    3572 out.go:177] 
	W0911 04:33:32.569930    3572 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:33:32.569952    3572 out.go:239] * 
	* 
	W0911 04:33:32.572757    3572 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:33:32.580645    3572 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-949000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
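Both creation attempts above die at the same step: socket_vmnet_client cannot reach the /var/run/socket_vmnet unix socket, so qemu-system-aarch64 is never launched and the profile never leaves the "Stopped" state. The sketch below is a hypothetical standalone probe, not part of the minikube test suite, that reproduces the failing dial in isolation; the socket path comes from the log above, while the timeout is an arbitrary choice.

// probe_socket_vmnet.go: hypothetical standalone check, not minikube code.
// It dials the unix socket that socket_vmnet_client connects to before
// handing the resulting fd to qemu (-netdev socket,id=net0,fd=3).
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// On this agent the dial is refused, matching the STDERR captured
		// for every qemu2 VM creation attempt in this run.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

A refusal here likely points at the socket_vmnet daemon on the build agent rather than at the test or the minikube binary under test.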
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-949000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-949000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 89 (75.186125ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-949000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-949000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 89
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-949000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-949000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-949000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-949000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 89 (44.544041ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-949000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-949000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 89
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-949000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-949000\"\n"
panic.go:522: *** TestDockerFlags FAILED at 2023-09-11 04:33:32.716809 -0700 PDT m=+3605.264191709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-949000 -n docker-flags-949000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-949000 -n docker-flags-949000: exit status 7 (27.902583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-949000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-949000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-949000
--- FAIL: TestDockerFlags (10.06s)
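The exit statuses in this failure are a cascade from the first error: the start exits with status 80 (GUEST_PROVISION), both ssh assertions then exit with status 89 because the control plane is stopped, and the post-mortem status check returns 7, which helpers_test.go treats as "may be ok". The log also shows the start path attempting host creation exactly twice, with a fixed 5-second pause between attempts. Below is a simplified reconstruction of that retry shape; it is an illustration whose names and structure are assumptions, not minikube's actual start.go code.

// retry_sketch.go: illustrative only; createHost is a hypothetical stand-in
// for libmachine.API.Create, which on this agent always fails with the
// socket_vmnet "Connection refused" error.
package main

import (
	"errors"
	"fmt"
	"time"
)

func createHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := createHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds"
		if err := createHost(); err != nil {
			// minikube surfaces this second failure as exit status 80.
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}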

TestForceSystemdFlag (10.07s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-064000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-064000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.858605459s)

-- stdout --
	* [force-systemd-flag-064000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-flag-064000 in cluster force-systemd-flag-064000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-064000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0911 04:33:17.778652    3550 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:33:17.778761    3550 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:33:17.778764    3550 out.go:309] Setting ErrFile to fd 2...
	I0911 04:33:17.778766    3550 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:33:17.778871    3550 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:33:17.779842    3550 out.go:303] Setting JSON to false
	I0911 04:33:17.794619    3550 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3771,"bootTime":1694428226,"procs":409,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0911 04:33:17.794689    3550 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:33:17.799850    3550 out.go:177] * [force-systemd-flag-064000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:33:17.806871    3550 out.go:177]   - MINIKUBE_LOCATION=17225
	I0911 04:33:17.810767    3550 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 04:33:17.806914    3550 notify.go:220] Checking for updates...
	I0911 04:33:17.817560    3550 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:33:17.824814    3550 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:33:17.827811    3550 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	I0911 04:33:17.830767    3550 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:33:17.834122    3550 config.go:182] Loaded profile config "force-systemd-env-623000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:33:17.834171    3550 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:33:17.838807    3550 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:33:17.845752    3550 start.go:298] selected driver: qemu2
	I0911 04:33:17.845757    3550 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:33:17.845768    3550 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:33:17.847676    3550 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:33:17.850791    3550 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:33:17.853908    3550 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0911 04:33:17.853929    3550 cni.go:84] Creating CNI manager for ""
	I0911 04:33:17.853943    3550 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 04:33:17.853950    3550 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0911 04:33:17.853956    3550 start_flags.go:321] config:
	{Name:force-systemd-flag-064000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-flag-064000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:33:17.858186    3550 iso.go:125] acquiring lock: {Name:mk940869f3ea2950f9d7f9f25946c8b774cc6054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:33:17.865628    3550 out.go:177] * Starting control plane node force-systemd-flag-064000 in cluster force-systemd-flag-064000
	I0911 04:33:17.869796    3550 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:33:17.869813    3550 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:33:17.869829    3550 cache.go:57] Caching tarball of preloaded images
	I0911 04:33:17.869879    3550 preload.go:174] Found /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:33:17.869884    3550 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:33:17.869963    3550 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/force-systemd-flag-064000/config.json ...
	I0911 04:33:17.869976    3550 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/force-systemd-flag-064000/config.json: {Name:mkd0d38e5f82acce0fe01f5620d7a78bf51165bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:33:17.870183    3550 start.go:365] acquiring machines lock for force-systemd-flag-064000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:33:17.870216    3550 start.go:369] acquired machines lock for "force-systemd-flag-064000" in 24.375µs
	I0911 04:33:17.870227    3550 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-064000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-flag-064000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:33:17.870266    3550 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:33:17.877730    3550 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0911 04:33:17.893839    3550 start.go:159] libmachine.API.Create for "force-systemd-flag-064000" (driver="qemu2")
	I0911 04:33:17.893870    3550 client.go:168] LocalClient.Create starting
	I0911 04:33:17.893924    3550 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:33:17.893950    3550 main.go:141] libmachine: Decoding PEM data...
	I0911 04:33:17.893961    3550 main.go:141] libmachine: Parsing certificate...
	I0911 04:33:17.894001    3550 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:33:17.894022    3550 main.go:141] libmachine: Decoding PEM data...
	I0911 04:33:17.894037    3550 main.go:141] libmachine: Parsing certificate...
	I0911 04:33:17.894369    3550 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:33:18.008979    3550 main.go:141] libmachine: Creating SSH key...
	I0911 04:33:18.215234    3550 main.go:141] libmachine: Creating Disk image...
	I0911 04:33:18.215241    3550 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:33:18.215466    3550 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/force-systemd-flag-064000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/force-systemd-flag-064000/disk.qcow2
	I0911 04:33:18.224324    3550 main.go:141] libmachine: STDOUT: 
	I0911 04:33:18.224341    3550 main.go:141] libmachine: STDERR: 
	I0911 04:33:18.224418    3550 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/force-systemd-flag-064000/disk.qcow2 +20000M
	I0911 04:33:18.231684    3550 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:33:18.231698    3550 main.go:141] libmachine: STDERR: 
	I0911 04:33:18.231711    3550 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/force-systemd-flag-064000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/force-systemd-flag-064000/disk.qcow2
	I0911 04:33:18.231717    3550 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:33:18.231747    3550 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/force-systemd-flag-064000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/force-systemd-flag-064000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/force-systemd-flag-064000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:7a:87:77:ce:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/force-systemd-flag-064000/disk.qcow2
	I0911 04:33:18.233279    3550 main.go:141] libmachine: STDOUT: 
	I0911 04:33:18.233294    3550 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:33:18.233312    3550 client.go:171] LocalClient.Create took 339.436042ms
	I0911 04:33:20.235463    3550 start.go:128] duration metric: createHost completed in 2.365181s
	I0911 04:33:20.235550    3550 start.go:83] releasing machines lock for "force-systemd-flag-064000", held for 2.36530375s
	W0911 04:33:20.235620    3550 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:33:20.255718    3550 out.go:177] * Deleting "force-systemd-flag-064000" in qemu2 ...
	W0911 04:33:20.270280    3550 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:33:20.270301    3550 start.go:687] Will try again in 5 seconds ...
	I0911 04:33:25.272516    3550 start.go:365] acquiring machines lock for force-systemd-flag-064000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:33:25.272947    3550 start.go:369] acquired machines lock for "force-systemd-flag-064000" in 307.542µs
	I0911 04:33:25.273081    3550 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-064000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-flag-064000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:33:25.273440    3550 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:33:25.282407    3550 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0911 04:33:25.329190    3550 start.go:159] libmachine.API.Create for "force-systemd-flag-064000" (driver="qemu2")
	I0911 04:33:25.329229    3550 client.go:168] LocalClient.Create starting
	I0911 04:33:25.329437    3550 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:33:25.329495    3550 main.go:141] libmachine: Decoding PEM data...
	I0911 04:33:25.329528    3550 main.go:141] libmachine: Parsing certificate...
	I0911 04:33:25.329605    3550 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:33:25.329645    3550 main.go:141] libmachine: Decoding PEM data...
	I0911 04:33:25.329659    3550 main.go:141] libmachine: Parsing certificate...
	I0911 04:33:25.330376    3550 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:33:25.461169    3550 main.go:141] libmachine: Creating SSH key...
	I0911 04:33:25.554472    3550 main.go:141] libmachine: Creating Disk image...
	I0911 04:33:25.554481    3550 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:33:25.554610    3550 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/force-systemd-flag-064000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/force-systemd-flag-064000/disk.qcow2
	I0911 04:33:25.562925    3550 main.go:141] libmachine: STDOUT: 
	I0911 04:33:25.562940    3550 main.go:141] libmachine: STDERR: 
	I0911 04:33:25.562989    3550 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/force-systemd-flag-064000/disk.qcow2 +20000M
	I0911 04:33:25.570183    3550 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:33:25.570205    3550 main.go:141] libmachine: STDERR: 
	I0911 04:33:25.570224    3550 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/force-systemd-flag-064000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/force-systemd-flag-064000/disk.qcow2
	I0911 04:33:25.570230    3550 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:33:25.570275    3550 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/force-systemd-flag-064000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/force-systemd-flag-064000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/force-systemd-flag-064000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:de:c0:6f:8e:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/force-systemd-flag-064000/disk.qcow2
	I0911 04:33:25.571787    3550 main.go:141] libmachine: STDOUT: 
	I0911 04:33:25.571801    3550 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:33:25.571814    3550 client.go:171] LocalClient.Create took 242.580917ms
	I0911 04:33:27.573987    3550 start.go:128] duration metric: createHost completed in 2.300497s
	I0911 04:33:27.574039    3550 start.go:83] releasing machines lock for "force-systemd-flag-064000", held for 2.301070208s
	W0911 04:33:27.574449    3550 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-064000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-064000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:33:27.583143    3550 out.go:177] 
	W0911 04:33:27.587155    3550 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:33:27.587179    3550 out.go:239] * 
	* 
	W0911 04:33:27.589904    3550 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:33:27.598133    3550 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-064000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-064000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-064000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (72.979417ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-flag-064000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-064000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2023-09-11 04:33:27.687006 -0700 PDT m=+3600.234383584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-064000 -n force-systemd-flag-064000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-064000 -n force-systemd-flag-064000: exit status 7 (34.102459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-064000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-064000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-064000
--- FAIL: TestForceSystemdFlag (10.07s)
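TestForceSystemdFlag never reaches its real assertion, the docker info --format {{.CgroupDriver}} check for the systemd cgroup driver, because host creation aborts on the same socket_vmnet refusal. TestForceSystemdEnv below follows the identical pattern, exercising MINIKUBE_FORCE_SYSTEMD=true from the environment instead of the --force-systemd flag.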

TestForceSystemdEnv (10.36s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-623000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-623000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.148486875s)

-- stdout --
	* [force-systemd-env-623000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-env-623000 in cluster force-systemd-env-623000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-623000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0911 04:33:12.449362    3518 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:33:12.449471    3518 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:33:12.449474    3518 out.go:309] Setting ErrFile to fd 2...
	I0911 04:33:12.449476    3518 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:33:12.449578    3518 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:33:12.450561    3518 out.go:303] Setting JSON to false
	I0911 04:33:12.465667    3518 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3766,"bootTime":1694428226,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0911 04:33:12.465734    3518 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:33:12.470869    3518 out.go:177] * [force-systemd-env-623000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:33:12.478847    3518 out.go:177]   - MINIKUBE_LOCATION=17225
	I0911 04:33:12.481822    3518 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 04:33:12.478908    3518 notify.go:220] Checking for updates...
	I0911 04:33:12.487810    3518 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:33:12.490823    3518 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:33:12.493813    3518 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	I0911 04:33:12.496809    3518 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0911 04:33:12.499882    3518 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:33:12.503766    3518 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:33:12.510826    3518 start.go:298] selected driver: qemu2
	I0911 04:33:12.510832    3518 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:33:12.510839    3518 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:33:12.512819    3518 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:33:12.515840    3518 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:33:12.518939    3518 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0911 04:33:12.518966    3518 cni.go:84] Creating CNI manager for ""
	I0911 04:33:12.518974    3518 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 04:33:12.518978    3518 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0911 04:33:12.518984    3518 start_flags.go:321] config:
	{Name:force-systemd-env-623000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-env-623000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:33:12.523040    3518 iso.go:125] acquiring lock: {Name:mk940869f3ea2950f9d7f9f25946c8b774cc6054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:33:12.529827    3518 out.go:177] * Starting control plane node force-systemd-env-623000 in cluster force-systemd-env-623000
	I0911 04:33:12.533630    3518 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:33:12.533650    3518 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:33:12.533668    3518 cache.go:57] Caching tarball of preloaded images
	I0911 04:33:12.533729    3518 preload.go:174] Found /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:33:12.533735    3518 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:33:12.533936    3518 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/force-systemd-env-623000/config.json ...
	I0911 04:33:12.533948    3518 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/force-systemd-env-623000/config.json: {Name:mkecb3c3756188040666dbf0dffc1f35f4fd0ea2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:33:12.534164    3518 start.go:365] acquiring machines lock for force-systemd-env-623000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:33:12.534195    3518 start.go:369] acquired machines lock for "force-systemd-env-623000" in 24.583µs
	I0911 04:33:12.534206    3518 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-623000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-env-623000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:33:12.534234    3518 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:33:12.538804    3518 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0911 04:33:12.554875    3518 start.go:159] libmachine.API.Create for "force-systemd-env-623000" (driver="qemu2")
	I0911 04:33:12.554902    3518 client.go:168] LocalClient.Create starting
	I0911 04:33:12.554960    3518 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:33:12.554983    3518 main.go:141] libmachine: Decoding PEM data...
	I0911 04:33:12.554993    3518 main.go:141] libmachine: Parsing certificate...
	I0911 04:33:12.555027    3518 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:33:12.555044    3518 main.go:141] libmachine: Decoding PEM data...
	I0911 04:33:12.555053    3518 main.go:141] libmachine: Parsing certificate...
	I0911 04:33:12.555368    3518 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:33:12.672956    3518 main.go:141] libmachine: Creating SSH key...
	I0911 04:33:12.776466    3518 main.go:141] libmachine: Creating Disk image...
	I0911 04:33:12.776472    3518 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:33:12.776617    3518 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/force-systemd-env-623000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/force-systemd-env-623000/disk.qcow2
	I0911 04:33:12.785360    3518 main.go:141] libmachine: STDOUT: 
	I0911 04:33:12.785371    3518 main.go:141] libmachine: STDERR: 
	I0911 04:33:12.785424    3518 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/force-systemd-env-623000/disk.qcow2 +20000M
	I0911 04:33:12.792553    3518 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:33:12.792563    3518 main.go:141] libmachine: STDERR: 
	I0911 04:33:12.792576    3518 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/force-systemd-env-623000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/force-systemd-env-623000/disk.qcow2
	I0911 04:33:12.792582    3518 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:33:12.792615    3518 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/force-systemd-env-623000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/force-systemd-env-623000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/force-systemd-env-623000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:c4:d2:f5:72:25 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/force-systemd-env-623000/disk.qcow2
	I0911 04:33:12.794111    3518 main.go:141] libmachine: STDOUT: 
	I0911 04:33:12.794123    3518 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:33:12.794140    3518 client.go:171] LocalClient.Create took 239.231333ms
	I0911 04:33:14.796229    3518 start.go:128] duration metric: createHost completed in 2.261988375s
	I0911 04:33:14.796260    3518 start.go:83] releasing machines lock for "force-systemd-env-623000", held for 2.26206275s
	W0911 04:33:14.796278    3518 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:33:14.804993    3518 out.go:177] * Deleting "force-systemd-env-623000" in qemu2 ...
	W0911 04:33:14.812799    3518 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:33:14.812810    3518 start.go:687] Will try again in 5 seconds ...
	I0911 04:33:19.813416    3518 start.go:365] acquiring machines lock for force-systemd-env-623000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:33:20.235711    3518 start.go:369] acquired machines lock for "force-systemd-env-623000" in 422.156416ms
	I0911 04:33:20.235836    3518 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-623000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-env-623000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:33:20.236163    3518 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:33:20.242712    3518 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0911 04:33:20.290106    3518 start.go:159] libmachine.API.Create for "force-systemd-env-623000" (driver="qemu2")
	I0911 04:33:20.290152    3518 client.go:168] LocalClient.Create starting
	I0911 04:33:20.290321    3518 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:33:20.290383    3518 main.go:141] libmachine: Decoding PEM data...
	I0911 04:33:20.290404    3518 main.go:141] libmachine: Parsing certificate...
	I0911 04:33:20.290475    3518 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:33:20.290516    3518 main.go:141] libmachine: Decoding PEM data...
	I0911 04:33:20.290534    3518 main.go:141] libmachine: Parsing certificate...
	I0911 04:33:20.291133    3518 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:33:20.425966    3518 main.go:141] libmachine: Creating SSH key...
	I0911 04:33:20.512443    3518 main.go:141] libmachine: Creating Disk image...
	I0911 04:33:20.512449    3518 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:33:20.512602    3518 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/force-systemd-env-623000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/force-systemd-env-623000/disk.qcow2
	I0911 04:33:20.521077    3518 main.go:141] libmachine: STDOUT: 
	I0911 04:33:20.521091    3518 main.go:141] libmachine: STDERR: 
	I0911 04:33:20.521149    3518 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/force-systemd-env-623000/disk.qcow2 +20000M
	I0911 04:33:20.528205    3518 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:33:20.528217    3518 main.go:141] libmachine: STDERR: 
	I0911 04:33:20.528230    3518 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/force-systemd-env-623000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/force-systemd-env-623000/disk.qcow2
	I0911 04:33:20.528241    3518 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:33:20.528283    3518 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/force-systemd-env-623000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/force-systemd-env-623000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/force-systemd-env-623000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:2e:7c:aa:d9:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/force-systemd-env-623000/disk.qcow2
	I0911 04:33:20.529799    3518 main.go:141] libmachine: STDOUT: 
	I0911 04:33:20.529812    3518 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:33:20.529823    3518 client.go:171] LocalClient.Create took 239.666208ms
	I0911 04:33:22.532097    3518 start.go:128] duration metric: createHost completed in 2.295862625s
	I0911 04:33:22.532160    3518 start.go:83] releasing machines lock for "force-systemd-env-623000", held for 2.296425s
	W0911 04:33:22.532607    3518 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-623000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:33:22.541172    3518 out.go:177] 
	W0911 04:33:22.545203    3518 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:33:22.545252    3518 out.go:239] * 
	W0911 04:33:22.547811    3518 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:33:22.556098    3518 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-623000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-623000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-623000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (76.624708ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-env-623000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-623000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2023-09-11 04:33:22.649145 -0700 PDT m=+3595.196517793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-623000 -n force-systemd-env-623000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-623000 -n force-systemd-env-623000: exit status 7 (34.399542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-623000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-623000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-623000
--- FAIL: TestForceSystemdEnv (10.36s)
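
Both create attempts for this test die at the same step: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the /var/run/socket_vmnet unix socket ("Connection refused"), so the QEMU VM is never started and every later assertion fails as a consequence. Below is a minimal Go sketch of a preflight probe for that socket; it is illustrative only (the program and its names are hypothetical, not minikube code) and assumes the socket_vmnet daemon is supposed to be listening at the path shown in the log.

// probe_socket_vmnet.go - hypothetical preflight check, not part of minikube.
// Dials the unix socket that the qemu2 driver's socket_vmnet network needs;
// failing here reproduces the "Connection refused" above without booting a VM.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path taken from the log above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1) // same failure class as the test's "Connection refused"
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the probe fails, the daemon is simply not running on the build host; on a Homebrew install, starting it (for example with "sudo brew services start socket_vmnet") would typically clear this whole class of failures.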

TestFunctional/serial/StartWithProxy (79.15s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-942000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-942000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 90 (1m19.034343583s)

-- stdout --
	* [functional-942000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node functional-942000 in cluster functional-942000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	* Found network options:
	  - HTTP_PROXY=localhost:49381
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:49381 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.105.4).
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-942000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-942000 -n functional-942000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-942000 -n functional-942000: exit status 6 (113.169708ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0911 03:36:30.758488    1609 status.go:415] kubeconfig endpoint: extract IP: "functional-942000" does not appear in /Users/jenkins/minikube-integration/17225-951/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "functional-942000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestFunctional/serial/StartWithProxy (79.15s)
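
The hard failure here is RUNTIME_ENABLE ("sudo systemctl restart docker" exiting with status 1 inside the guest); the proxy warning above it is secondary but worth checking, since a NO_PROXY that does not cover the minikube IP (192.168.105.4) breaks API traffic in proxied environments. The Go sketch below shows such an environment check; it is a deliberately simplified illustration (exact string match only, ignoring CIDR and wildcard entries) and is not part of the test suite.

// check_no_proxy.go - hypothetical helper, not from the minikube tests.
// Verifies that NO_PROXY (or no_proxy) lists the VM IP named in the warning.
package main

import (
	"fmt"
	"os"
	"strings"
)

// noProxyCovers reports whether ip appears verbatim in NO_PROXY/no_proxy.
// Real proxy handling also honors CIDR ranges and suffix rules; this sketch
// deliberately checks exact entries only.
func noProxyCovers(ip string) bool {
	for _, env := range []string{"NO_PROXY", "no_proxy"} {
		for _, entry := range strings.Split(os.Getenv(env), ",") {
			if strings.TrimSpace(entry) == ip {
				return true
			}
		}
	}
	return false
}

func main() {
	const vmIP = "192.168.105.4" // IP reported by the warning in the log
	if !noProxyCovers(vmIP) {
		fmt.Printf("NO_PROXY does not include %s; API traffic may be sent through the proxy\n", vmIP)
		os.Exit(1)
	}
	fmt.Println("NO_PROXY covers the minikube IP")
}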

TestFunctional/serial/SoftStart (120.21s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-942000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-942000 --alsologtostderr -v=8: exit status 90 (2m0.091568333s)

-- stdout --
	* [functional-942000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node functional-942000 in cluster functional-942000
	* Updating the running qemu2 "functional-942000" VM ...
	
	

-- /stdout --
** stderr ** 
	I0911 03:36:30.790278    1611 out.go:296] Setting OutFile to fd 1 ...
	I0911 03:36:30.790388    1611 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 03:36:30.790392    1611 out.go:309] Setting ErrFile to fd 2...
	I0911 03:36:30.790394    1611 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 03:36:30.790517    1611 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 03:36:30.791498    1611 out.go:303] Setting JSON to false
	I0911 03:36:30.806544    1611 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":364,"bootTime":1694428226,"procs":397,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0911 03:36:30.806605    1611 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 03:36:30.811201    1611 out.go:177] * [functional-942000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 03:36:30.818183    1611 out.go:177]   - MINIKUBE_LOCATION=17225
	I0911 03:36:30.822137    1611 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 03:36:30.818230    1611 notify.go:220] Checking for updates...
	I0911 03:36:30.831111    1611 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 03:36:30.834135    1611 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 03:36:30.837147    1611 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	I0911 03:36:30.840168    1611 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 03:36:30.843431    1611 config.go:182] Loaded profile config "functional-942000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 03:36:30.843481    1611 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 03:36:30.848279    1611 out.go:177] * Using the qemu2 driver based on existing profile
	I0911 03:36:30.855165    1611 start.go:298] selected driver: qemu2
	I0911 03:36:30.855172    1611 start.go:902] validating driver "qemu2" against &{Name:functional-942000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-942000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 03:36:30.855225    1611 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 03:36:30.857107    1611 cni.go:84] Creating CNI manager for ""
	I0911 03:36:30.857125    1611 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 03:36:30.857131    1611 start_flags.go:321] config:
	{Name:functional-942000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-942000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 03:36:30.861114    1611 iso.go:125] acquiring lock: {Name:mk940869f3ea2950f9d7f9f25946c8b774cc6054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 03:36:30.868133    1611 out.go:177] * Starting control plane node functional-942000 in cluster functional-942000
	I0911 03:36:30.872130    1611 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 03:36:30.872150    1611 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 03:36:30.872163    1611 cache.go:57] Caching tarball of preloaded images
	I0911 03:36:30.872227    1611 preload.go:174] Found /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 03:36:30.872235    1611 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 03:36:30.872317    1611 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/functional-942000/config.json ...
	I0911 03:36:30.872696    1611 start.go:365] acquiring machines lock for functional-942000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 03:36:30.872733    1611 start.go:369] acquired machines lock for "functional-942000" in 30.042µs
	I0911 03:36:30.872743    1611 start.go:96] Skipping create...Using existing machine configuration
	I0911 03:36:30.872748    1611 fix.go:54] fixHost starting: 
	I0911 03:36:30.873357    1611 fix.go:102] recreateIfNeeded on functional-942000: state=Running err=<nil>
	W0911 03:36:30.873367    1611 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 03:36:30.881157    1611 out.go:177] * Updating the running qemu2 "functional-942000" VM ...
	I0911 03:36:30.884042    1611 machine.go:88] provisioning docker machine ...
	I0911 03:36:30.884055    1611 buildroot.go:166] provisioning hostname "functional-942000"
	I0911 03:36:30.884081    1611 main.go:141] libmachine: Using SSH client type: native
	I0911 03:36:30.884378    1611 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100f763b0] 0x100f78e10 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0911 03:36:30.884384    1611 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-942000 && echo "functional-942000" | sudo tee /etc/hostname
	I0911 03:36:30.956113    1611 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-942000
	
	I0911 03:36:30.956163    1611 main.go:141] libmachine: Using SSH client type: native
	I0911 03:36:30.956451    1611 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100f763b0] 0x100f78e10 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0911 03:36:30.956461    1611 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-942000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-942000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-942000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 03:36:31.022710    1611 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 03:36:31.022721    1611 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17225-951/.minikube CaCertPath:/Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17225-951/.minikube}
	I0911 03:36:31.022732    1611 buildroot.go:174] setting up certificates
	I0911 03:36:31.022740    1611 provision.go:83] configureAuth start
	I0911 03:36:31.022745    1611 provision.go:138] copyHostCerts
	I0911 03:36:31.022780    1611 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17225-951/.minikube/ca.pem
	I0911 03:36:31.022834    1611 exec_runner.go:144] found /Users/jenkins/minikube-integration/17225-951/.minikube/ca.pem, removing ...
	I0911 03:36:31.022840    1611 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17225-951/.minikube/ca.pem
	I0911 03:36:31.022955    1611 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17225-951/.minikube/ca.pem (1078 bytes)
	I0911 03:36:31.023129    1611 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17225-951/.minikube/cert.pem
	I0911 03:36:31.023156    1611 exec_runner.go:144] found /Users/jenkins/minikube-integration/17225-951/.minikube/cert.pem, removing ...
	I0911 03:36:31.023160    1611 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17225-951/.minikube/cert.pem
	I0911 03:36:31.023213    1611 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17225-951/.minikube/cert.pem (1123 bytes)
	I0911 03:36:31.023309    1611 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17225-951/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17225-951/.minikube/key.pem
	I0911 03:36:31.023334    1611 exec_runner.go:144] found /Users/jenkins/minikube-integration/17225-951/.minikube/key.pem, removing ...
	I0911 03:36:31.023337    1611 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17225-951/.minikube/key.pem
	I0911 03:36:31.023386    1611 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17225-951/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17225-951/.minikube/key.pem (1675 bytes)
	I0911 03:36:31.023484    1611 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17225-951/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca-key.pem org=jenkins.functional-942000 san=[192.168.105.4 192.168.105.4 localhost 127.0.0.1 minikube functional-942000]
	I0911 03:36:31.128611    1611 provision.go:172] copyRemoteCerts
	I0911 03:36:31.128655    1611 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 03:36:31.128668    1611 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/functional-942000/id_rsa Username:docker}
	I0911 03:36:31.163996    1611 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0911 03:36:31.164056    1611 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0911 03:36:31.171734    1611 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17225-951/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0911 03:36:31.171777    1611 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0911 03:36:31.178693    1611 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17225-951/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0911 03:36:31.178738    1611 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0911 03:36:31.185398    1611 provision.go:86] duration metric: configureAuth took 162.652625ms
	I0911 03:36:31.185405    1611 buildroot.go:189] setting minikube options for container-runtime
	I0911 03:36:31.185504    1611 config.go:182] Loaded profile config "functional-942000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 03:36:31.185541    1611 main.go:141] libmachine: Using SSH client type: native
	I0911 03:36:31.185758    1611 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100f763b0] 0x100f78e10 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0911 03:36:31.185766    1611 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0911 03:36:31.246788    1611 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0911 03:36:31.246799    1611 buildroot.go:70] root file system type: tmpfs
	I0911 03:36:31.246855    1611 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0911 03:36:31.246903    1611 main.go:141] libmachine: Using SSH client type: native
	I0911 03:36:31.247152    1611 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100f763b0] 0x100f78e10 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0911 03:36:31.247202    1611 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0911 03:36:31.314218    1611 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0911 03:36:31.314263    1611 main.go:141] libmachine: Using SSH client type: native
	I0911 03:36:31.314498    1611 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100f763b0] 0x100f78e10 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0911 03:36:31.314511    1611 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0911 03:36:31.380122    1611 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 03:36:31.380133    1611 machine.go:91] provisioned docker machine in 496.093542ms
	I0911 03:36:31.380137    1611 start.go:300] post-start starting for "functional-942000" (driver="qemu2")
	I0911 03:36:31.380144    1611 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 03:36:31.380211    1611 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 03:36:31.380230    1611 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/functional-942000/id_rsa Username:docker}
	I0911 03:36:31.416802    1611 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 03:36:31.418199    1611 command_runner.go:130] > NAME=Buildroot
	I0911 03:36:31.418205    1611 command_runner.go:130] > VERSION=2021.02.12-1-g88b5c50-dirty
	I0911 03:36:31.418208    1611 command_runner.go:130] > ID=buildroot
	I0911 03:36:31.418212    1611 command_runner.go:130] > VERSION_ID=2021.02.12
	I0911 03:36:31.418216    1611 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0911 03:36:31.418438    1611 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 03:36:31.418448    1611 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17225-951/.minikube/addons for local assets ...
	I0911 03:36:31.418521    1611 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17225-951/.minikube/files for local assets ...
	I0911 03:36:31.418652    1611 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17225-951/.minikube/files/etc/ssl/certs/13932.pem -> 13932.pem in /etc/ssl/certs
	I0911 03:36:31.418657    1611 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17225-951/.minikube/files/etc/ssl/certs/13932.pem -> /etc/ssl/certs/13932.pem
	I0911 03:36:31.418780    1611 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17225-951/.minikube/files/etc/test/nested/copy/1393/hosts -> hosts in /etc/test/nested/copy/1393
	I0911 03:36:31.418784    1611 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17225-951/.minikube/files/etc/test/nested/copy/1393/hosts -> /etc/test/nested/copy/1393/hosts
	I0911 03:36:31.418818    1611 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1393
	I0911 03:36:31.421912    1611 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/files/etc/ssl/certs/13932.pem --> /etc/ssl/certs/13932.pem (1708 bytes)
	I0911 03:36:31.429329    1611 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/files/etc/test/nested/copy/1393/hosts --> /etc/test/nested/copy/1393/hosts (40 bytes)
	I0911 03:36:31.436359    1611 start.go:303] post-start completed in 56.216625ms
	I0911 03:36:31.436367    1611 fix.go:56] fixHost completed within 563.629584ms
	I0911 03:36:31.436401    1611 main.go:141] libmachine: Using SSH client type: native
	I0911 03:36:31.436631    1611 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100f763b0] 0x100f78e10 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0911 03:36:31.436636    1611 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0911 03:36:31.494421    1611 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694428591.558615359
	
	I0911 03:36:31.494428    1611 fix.go:206] guest clock: 1694428591.558615359
	I0911 03:36:31.494434    1611 fix.go:219] Guest: 2023-09-11 03:36:31.558615359 -0700 PDT Remote: 2023-09-11 03:36:31.436368 -0700 PDT m=+0.664778251 (delta=122.247359ms)
	I0911 03:36:31.494447    1611 fix.go:190] guest clock delta is within tolerance: 122.247359ms
	I0911 03:36:31.494450    1611 start.go:83] releasing machines lock for "functional-942000", held for 621.723458ms
	I0911 03:36:31.494718    1611 ssh_runner.go:195] Run: cat /version.json
	I0911 03:36:31.494727    1611 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/functional-942000/id_rsa Username:docker}
	I0911 03:36:31.494718    1611 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 03:36:31.494760    1611 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/functional-942000/id_rsa Username:docker}
	I0911 03:36:31.568650    1611 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0911 03:36:31.568930    1611 command_runner.go:130] > {"iso_version": "v1.31.0-1692872107-17120", "kicbase_version": "v0.0.40-1692613578-17086", "minikube_version": "v1.31.2", "commit": "9dc31f0284dc1a8a35859648c60120733f0f8296"}
	I0911 03:36:31.569005    1611 ssh_runner.go:195] Run: systemctl --version
	I0911 03:36:31.571329    1611 command_runner.go:130] > systemd 247 (247)
	I0911 03:36:31.571344    1611 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0911 03:36:31.571499    1611 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0911 03:36:31.573664    1611 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0911 03:36:31.573679    1611 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 03:36:31.573711    1611 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 03:36:31.577224    1611 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0911 03:36:31.577233    1611 start.go:466] detecting cgroup driver to use...
	I0911 03:36:31.577308    1611 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 03:36:31.583313    1611 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0911 03:36:31.583572    1611 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0911 03:36:31.586837    1611 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0911 03:36:31.589804    1611 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0911 03:36:31.589829    1611 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0911 03:36:31.593003    1611 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0911 03:36:31.596155    1611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0911 03:36:31.598926    1611 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0911 03:36:31.601815    1611 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 03:36:31.605180    1611 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0911 03:36:31.608387    1611 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 03:36:31.611120    1611 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0911 03:36:31.611161    1611 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 03:36:31.613810    1611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 03:36:31.683924    1611 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0911 03:36:31.690089    1611 start.go:466] detecting cgroup driver to use...
	I0911 03:36:31.690150    1611 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0911 03:36:31.695901    1611 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0911 03:36:31.696397    1611 command_runner.go:130] > [Unit]
	I0911 03:36:31.696403    1611 command_runner.go:130] > Description=Docker Application Container Engine
	I0911 03:36:31.696411    1611 command_runner.go:130] > Documentation=https://docs.docker.com
	I0911 03:36:31.696415    1611 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0911 03:36:31.696418    1611 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0911 03:36:31.696421    1611 command_runner.go:130] > StartLimitBurst=3
	I0911 03:36:31.696423    1611 command_runner.go:130] > StartLimitIntervalSec=60
	I0911 03:36:31.696425    1611 command_runner.go:130] > [Service]
	I0911 03:36:31.696428    1611 command_runner.go:130] > Type=notify
	I0911 03:36:31.696430    1611 command_runner.go:130] > Restart=on-failure
	I0911 03:36:31.696434    1611 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0911 03:36:31.696440    1611 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0911 03:36:31.696444    1611 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0911 03:36:31.696448    1611 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0911 03:36:31.696453    1611 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0911 03:36:31.696456    1611 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0911 03:36:31.696461    1611 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0911 03:36:31.696470    1611 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0911 03:36:31.696475    1611 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0911 03:36:31.696477    1611 command_runner.go:130] > ExecStart=
	I0911 03:36:31.696486    1611 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	I0911 03:36:31.696490    1611 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0911 03:36:31.696494    1611 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0911 03:36:31.696498    1611 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0911 03:36:31.696500    1611 command_runner.go:130] > LimitNOFILE=infinity
	I0911 03:36:31.696502    1611 command_runner.go:130] > LimitNPROC=infinity
	I0911 03:36:31.696504    1611 command_runner.go:130] > LimitCORE=infinity
	I0911 03:36:31.696507    1611 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0911 03:36:31.696510    1611 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0911 03:36:31.696512    1611 command_runner.go:130] > TasksMax=infinity
	I0911 03:36:31.696514    1611 command_runner.go:130] > TimeoutStartSec=0
	I0911 03:36:31.696519    1611 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0911 03:36:31.696522    1611 command_runner.go:130] > Delegate=yes
	I0911 03:36:31.696526    1611 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0911 03:36:31.696529    1611 command_runner.go:130] > KillMode=process
	I0911 03:36:31.696532    1611 command_runner.go:130] > [Install]
	I0911 03:36:31.696538    1611 command_runner.go:130] > WantedBy=multi-user.target
	I0911 03:36:31.696790    1611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 03:36:31.702289    1611 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 03:36:31.710209    1611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 03:36:31.715242    1611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0911 03:36:31.719720    1611 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 03:36:31.725004    1611 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0911 03:36:31.725154    1611 ssh_runner.go:195] Run: which cri-dockerd
	I0911 03:36:31.726573    1611 command_runner.go:130] > /usr/bin/cri-dockerd
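With /etc/crictl.yaml pointing at cri-dockerd and the binary confirmed on PATH, crictl reaches Docker through the CRI shim. A sketch of exercising that endpoint by hand, assuming shell access to the guest:

	# Uses the default endpoint from /etc/crictl.yaml written above
	sudo crictl info
	# Same call with the endpoint spelled out explicitly
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock info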
	I0911 03:36:31.726657    1611 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0911 03:36:31.729350    1611 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0911 03:36:31.734558    1611 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0911 03:36:31.811857    1611 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0911 03:36:31.875774    1611 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0911 03:36:31.875788    1611 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
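The 144-byte daemon.json itself is not echoed into the log; for the "cgroupfs" driver selection noted above it would plausibly contain something like the sketch below (an assumption -- the real payload is not shown and likely carries other keys as well):

	# Hypothetical reconstruction of the config minikube writes here
	cat <<'EOF' | sudo tee /etc/docker/daemon.json
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}
	EOF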
	I0911 03:36:31.881162    1611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 03:36:31.947882    1611 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0911 03:38:30.822701    1611 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0911 03:38:30.822769    1611 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xe" for details.
	I0911 03:38:30.823016    1611 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m58.877016208s)
	I0911 03:38:30.826965    1611 out.go:177] 
	W0911 03:38:30.831100    1611 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	W0911 03:38:30.831135    1611 out.go:239] * 
	W0911 03:38:30.834008    1611 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 03:38:30.845008    1611 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-942000 --alsologtostderr -v=8": exit status 90
functional_test.go:659: soft start took 2m0.098497458s for "functional-942000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-942000 -n functional-942000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-942000 -n functional-942000: exit status 6 (112.876ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0911 03:38:30.968941    1641 status.go:415] kubeconfig endpoint: extract IP: "functional-942000" does not appear in /Users/jenkins/minikube-integration/17225-951/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "functional-942000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestFunctional/serial/SoftStart (120.21s)
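The soft start fails because `sudo systemctl restart docker` ran for nearly two minutes before systemd gave up on the control process. The error text already names the follow-up; as a sketch, run from the host against this profile:

	# Ask systemd why docker.service refused to start (inside the VM)
	minikube -p functional-942000 ssh -- sudo systemctl status docker.service --no-pager
	# Full journal for the failed start attempt
	minikube -p functional-942000 ssh -- sudo journalctl -xeu docker.service --no-pager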

                                                
                                    
TestFunctional/serial/KubeContext (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (25.902625ms)

                                                
                                                
** stderr ** 
	error: current-context is not set

                                                
                                                
** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-942000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-942000 -n functional-942000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-942000 -n functional-942000: exit status 6 (73.01425ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0911 03:38:31.069093    1644 status.go:415] kubeconfig endpoint: extract IP: "functional-942000" does not appear in /Users/jenkins/minikube-integration/17225-951/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "functional-942000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestFunctional/serial/KubeContext (0.10s)
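Every post-mortem in this run hits the same root cause: the "functional-942000" entry is missing from the kubeconfig, so kubectl has no current context. The warning in the status output points at the fix; a sketch of the recovery steps:

	# Confirm the context really is absent
	kubectl config get-contexts
	# Rewrite the kubeconfig entry from the live profile, as the warning suggests
	minikube -p functional-942000 update-context
	kubectl config use-context functional-942000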

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-942000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-942000 get po -A: exit status 1 (26.359708ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-942000

                                                
                                                
** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-942000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-942000\n"*: args "kubectl --context functional-942000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-942000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-942000 -n functional-942000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-942000 -n functional-942000: exit status 6 (73.305208ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0911 03:38:31.169320    1647 status.go:415] kubeconfig endpoint: extract IP: "functional-942000" does not appear in /Users/jenkins/minikube-integration/17225-951/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "functional-942000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (60.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-942000 ssh sudo crictl images: exit status 1 (1m0.046539416s)

                                                
                                                
-- stdout --
	FATA[0059] listing images: rpc error: code = Unknown desc = error during connect: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.42/images/json": read unix @->/var/run/docker.sock: read: connection reset by peer 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-942000 ssh sudo crictl images" ssh exit status 1
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	FATA[0059] listing images: rpc error: code = Unknown desc = error during connect: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.42/images/json": read unix @->/var/run/docker.sock: read: connection reset by peer 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr ***
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (60.05s)
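Note the shape of this failure: crictl reaches cri-dockerd fine, but cri-dockerd's onward call into /var/run/docker.sock is reset, which matches dockerd never coming back from the failed restart above. A sketch for confirming the daemon side directly (assuming curl is available in the guest image):

	minikube -p functional-942000 ssh -- sudo systemctl is-active docker
	# Docker's liveness endpoint over the unix socket; a healthy daemon answers "OK"
	minikube -p functional-942000 ssh "sudo curl --unix-socket /var/run/docker.sock http://localhost/_ping"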

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (301.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-942000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 1 (1m0.12957125s)

                                                
                                                
-- stdout --
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-942000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 1
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-942000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (1m0.348506792s)

                                                
                                                
-- stdout --
	FATA[0060] image status for "registry.k8s.io/pause:latest" request: rpc error: code = Unknown desc = error during connect: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.42/images/registry.k8s.io/pause:latest/json": read unix @->/run/docker.sock: read: connection reset by peer 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-darwin-arm64 -p functional-942000 cache reload: (2m0.278119958s)
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-942000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (1m0.465917625s)

                                                
                                                
-- stdout --
	FATA[0060] image status for "registry.k8s.io/pause:latest" request: rpc error: code = Unknown desc = error during connect: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.42/images/registry.k8s.io/pause:latest/json": read unix @->/run/docker.sock: read: connection reset by peer 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-942000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 1
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (301.22s)
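`cache reload` can only push previously cached images back into the node's runtime, so with dockerd down the reload returns success at the CLI level while the follow-up `crictl inspecti` still cannot see any image. The cache flow being exercised, as a sketch:

	minikube -p functional-942000 cache add registry.k8s.io/pause:latest
	minikube -p functional-942000 cache list
	# Re-load every cached image into the (currently broken) runtime
	minikube -p functional-942000 cache reload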

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.53s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 kubectl -- --context functional-942000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-942000 kubectl -- --context functional-942000 get pods: exit status 1 (458.089458ms)

                                                
                                                
** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-942000
	* no server found for cluster "functional-942000"

                                                
                                                
** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-942000 kubectl -- --context functional-942000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-942000 -n functional-942000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-942000 -n functional-942000: exit status 6 (76.1405ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0911 03:51:34.554743    1896 status.go:415] kubeconfig endpoint: extract IP: "functional-942000" does not appear in /Users/jenkins/minikube-integration/17225-951/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "functional-942000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.53s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.65s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-942000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-942000 get pods: exit status 1 (580.350958ms)

                                                
                                                
** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-942000
	* no server found for cluster "functional-942000"

                                                
                                                
** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-942000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-942000 -n functional-942000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-942000 -n functional-942000: exit status 6 (71.528584ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0911 03:51:35.207724    1901 status.go:415] kubeconfig endpoint: extract IP: "functional-942000" does not appear in /Users/jenkins/minikube-integration/17225-951/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "functional-942000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.65s)

                                                
                                    
TestFunctional/serial/ExtraConfig (118.91s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-942000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-942000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 90 (1m58.79532025s)

                                                
                                                
-- stdout --
	* [functional-942000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node functional-942000 in cluster functional-942000
	* Updating the running qemu2 "functional-942000" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-942000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 90
functional_test.go:757: restart took 1m58.795827084s for "functional-942000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-942000 -n functional-942000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-942000 -n functional-942000: exit status 6 (111.242291ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0911 03:53:34.112834    1930 status.go:415] kubeconfig endpoint: extract IP: "functional-942000" does not appear in /Users/jenkins/minikube-integration/17225-951/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "functional-942000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestFunctional/serial/ExtraConfig (118.91s)
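--extra-config passes per-component flags straight through to the Kubernetes components at start time, in <component>.<flag>=<value> form. The shape of the invocation, sketched with one hypothetical second flag (kubelet.max-pods) added for contrast:

	out/minikube-darwin-arm64 start -p functional-942000 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	  --extra-config=kubelet.max-pods=110 \
	  --wait=all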

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-942000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-942000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (26.721708ms)

                                                
                                                
** stderr ** 
	error: context "functional-942000" does not exist

                                                
                                                
** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-942000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-942000 -n functional-942000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-942000 -n functional-942000: exit status 6 (73.879667ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0911 03:53:34.213403    1933 status.go:415] kubeconfig endpoint: extract IP: "functional-942000" does not appear in /Users/jenkins/minikube-integration/17225-951/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "functional-942000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (180.75s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd12173273/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-arm64 -p functional-942000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd12173273/001/logs.txt: (3m0.750629875s)
functional_test.go:1251: expected empty minikube logs output, but got: 
***
-- stdout --
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0911 03:57:34.878183    1968 logs.go:281] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0911 03:57:34.897604    1968 logs.go:281] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0911 03:57:34.911746    1968 logs.go:281] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0911 03:57:34.923026    1968 logs.go:281] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0911 03:57:34.932551    1968 logs.go:281] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0911 03:57:34.940825    1968 logs.go:281] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0911 03:57:34.947840    1968 logs.go:281] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0911 03:59:35.401498    1968 logs.go:195] command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2023-09-11T10:58:35Z" level=fatal msg="listing containers: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.42/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/run/docker.sock: read: connection reset by peer"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	 output: "\n** stderr ** \ntime=\"2023-09-11T10:58:35Z\" level=fatal msg=\"listing containers: rpc error: code = Unknown desc = error during connect: Get \\\"http://%2Fvar%2Frun%2Fdocker.sock/v1.42/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\\\": read unix @->/run/docker.sock: read: connection reset by peer\"\nCannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?\n\n** /stderr **"
	E0911 03:59:35.424377    1968 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	sudo: /var/lib/minikube/binaries/v1.28.1/kubectl: command not found
	 output: "\n** stderr ** \nsudo: /var/lib/minikube/binaries/v1.28.1/kubectl: command not found\n\n** /stderr **"
	! unable to fetch logs for: container status, describe nodes

                                                
                                                
** /stderr *****
--- FAIL: TestFunctional/serial/LogsFileCmd (180.75s)

                                                
                                    
TestFunctional/serial/InvalidService (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-942000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-942000 apply -f testdata/invalidsvc.yaml: exit status 1 (52.274375ms)

                                                
                                                
** stderr ** 
	error: context "functional-942000" does not exist

                                                
                                                
** /stderr **
functional_test.go:2319: kubectl --context functional-942000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.05s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-942000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-942000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-942000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-942000 --alsologtostderr -v=1] stderr:
I0911 04:07:21.581053    2350 out.go:296] Setting OutFile to fd 1 ...
I0911 04:07:21.581289    2350 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 04:07:21.581291    2350 out.go:309] Setting ErrFile to fd 2...
I0911 04:07:21.581294    2350 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 04:07:21.581395    2350 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
I0911 04:07:21.581615    2350 mustload.go:65] Loading cluster: functional-942000
I0911 04:07:21.581845    2350 config.go:182] Loaded profile config "functional-942000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0911 04:07:21.582532    2350 host.go:66] Checking if "functional-942000" exists ...
I0911 04:07:21.582642    2350 api_server.go:166] Checking apiserver status ...
I0911 04:07:21.582667    2350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0911 04:07:21.582674    2350 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/functional-942000/id_rsa Username:docker}
W0911 04:07:21.616684    2350 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I0911 04:07:21.620502    2350 out.go:177] * This control plane is not running! (state=Stopped)
W0911 04:07:21.623680    2350 out.go:239] ! This is unusual - you may want to investigate using "minikube logs -p functional-942000"
I0911 04:07:21.626662    2350 out.go:177]   To start a cluster, run: "minikube start -p functional-942000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-942000 -n functional-942000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-942000 -n functional-942000: exit status 6 (82.576459ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0911 04:07:21.817454    2353 status.go:415] kubeconfig endpoint: extract IP: "functional-942000" does not appear in /Users/jenkins/minikube-integration/17225-951/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "functional-942000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.27s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-942000 status: exit status 6 (71.581875ms)

                                                
                                                
-- stdout --
	functional-942000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0911 04:07:03.797920    2251 status.go:415] kubeconfig endpoint: extract IP: "functional-942000" does not appear in /Users/jenkins/minikube-integration/17225-951/kubeconfig

                                                
                                                
** /stderr **
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-942000 status" : exit status 6
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-942000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 6 (72.271375ms)

                                                
                                                
-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Misconfigured
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0911 04:07:03.870660    2253 status.go:415] kubeconfig endpoint: extract IP: "functional-942000" does not appear in /Users/jenkins/minikube-integration/17225-951/kubeconfig

                                                
                                                
** /stderr **
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-942000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 6
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-942000 status -o json: exit status 6 (71.937375ms)

                                                
                                                
-- stdout --
	{"Name":"functional-942000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Misconfigured","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	E0911 04:07:03.942557    2255 status.go:415] kubeconfig endpoint: extract IP: "functional-942000" does not appear in /Users/jenkins/minikube-integration/17225-951/kubeconfig

                                                
                                                
** /stderr **
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-942000 status -o json" : exit status 6
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-942000 -n functional-942000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-942000 -n functional-942000: exit status 6 (71.841084ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0911 04:07:04.014608    2257 status.go:415] kubeconfig endpoint: extract IP: "functional-942000" does not appear in /Users/jenkins/minikube-integration/17225-951/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "functional-942000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestFunctional/parallel/StatusCmd (0.29s)
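The three status invocations above cover minikube's output modes: the default table, a Go template via --format (the test's template spells "kublet", which is faithful to the test source), and machine-readable JSON. A sketch of consuming each:

	# Single field through a Go template
	out/minikube-darwin-arm64 -p functional-942000 status --format='{{.Host}}'
	# JSON, filtered with jq (assuming jq is installed on the host)
	out/minikube-darwin-arm64 -p functional-942000 status -o json | jq -r .Kubeconfig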

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-942000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1626: (dbg) Non-zero exit: kubectl --context functional-942000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (25.710209ms)

                                                
                                                
** stderr ** 
	error: context "functional-942000" does not exist

                                                
                                                
** /stderr **
functional_test.go:1632: failed to create hello-node deployment with this command "kubectl --context functional-942000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1597: service test failed - dumping debug information
functional_test.go:1598: -----------------------service failure post-mortem--------------------------------
functional_test.go:1601: (dbg) Run:  kubectl --context functional-942000 describe po hello-node-connect
functional_test.go:1601: (dbg) Non-zero exit: kubectl --context functional-942000 describe po hello-node-connect: exit status 1 (24.853625ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-942000

                                                
                                                
** /stderr **
functional_test.go:1603: "kubectl --context functional-942000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1605: hello-node pod describe:
functional_test.go:1607: (dbg) Run:  kubectl --context functional-942000 logs -l app=hello-node-connect
functional_test.go:1607: (dbg) Non-zero exit: kubectl --context functional-942000 logs -l app=hello-node-connect: exit status 1 (25.024625ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-942000

                                                
                                                
** /stderr **
functional_test.go:1609: "kubectl --context functional-942000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1611: hello-node logs:
functional_test.go:1613: (dbg) Run:  kubectl --context functional-942000 describe svc hello-node-connect
functional_test.go:1613: (dbg) Non-zero exit: kubectl --context functional-942000 describe svc hello-node-connect: exit status 1 (24.965375ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-942000

                                                
                                                
** /stderr **
functional_test.go:1615: "kubectl --context functional-942000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1617: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-942000 -n functional-942000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-942000 -n functional-942000: exit status 6 (73.8515ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0911 04:07:03.725928    2249 status.go:415] kubeconfig endpoint: extract IP: "functional-942000" does not appear in /Users/jenkins/minikube-integration/17225-951/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "functional-942000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.18s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-942000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-942000 -n functional-942000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-942000 -n functional-942000: exit status 6 (120.462ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0911 04:07:03.435509    2237 status.go:415] kubeconfig endpoint: extract IP: "functional-942000" does not appear in /Users/jenkins/minikube-integration/17225-951/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "functional-942000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.12s)

                                                
                                    
TestFunctional/parallel/CertSync (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1393.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 ssh "sudo cat /etc/ssl/certs/1393.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-942000 ssh "sudo cat /etc/ssl/certs/1393.pem": exit status 1 (65.071208ms)

                                                
                                                
-- stdout --
	cat: /etc/ssl/certs/1393.pem: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/1393.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-942000 ssh \"sudo cat /etc/ssl/certs/1393.pem\"": exit status 1
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/1393.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	cat: /etc/ssl/certs/1393.pem: No such file or directory
	"""
)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1393.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 ssh "sudo cat /usr/share/ca-certificates/1393.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-942000 ssh "sudo cat /usr/share/ca-certificates/1393.pem": exit status 1 (63.897375ms)

                                                
                                                
-- stdout --
	cat: /usr/share/ca-certificates/1393.pem: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/1393.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-942000 ssh \"sudo cat /usr/share/ca-certificates/1393.pem\"": exit status 1
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/1393.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	cat: /usr/share/ca-certificates/1393.pem: No such file or directory
	"""
)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-942000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 1 (63.864417ms)

-- stdout --
	cat: /etc/ssl/certs/51391683.0: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-942000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 1
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	cat: /etc/ssl/certs/51391683.0: No such file or directory
	"""
)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/13932.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 ssh "sudo cat /etc/ssl/certs/13932.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/13932.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 ssh "sudo cat /usr/share/ca-certificates/13932.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-942000 ssh "sudo cat /usr/share/ca-certificates/13932.pem": exit status 1 (63.006167ms)

-- stdout --
	cat: /usr/share/ca-certificates/13932.pem: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/13932.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-942000 ssh \"sudo cat /usr/share/ca-certificates/13932.pem\"": exit status 1
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/13932.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	cat: /usr/share/ca-certificates/13932.pem: No such file or directory
	"""
)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-942000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 1 (63.791458ms)

-- stdout --
	cat: /etc/ssl/certs/3ec20f2e.0: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-942000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 1
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	cat: /etc/ssl/certs/3ec20f2e.0: No such file or directory
	"""
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-942000 -n functional-942000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-942000 -n functional-942000: exit status 6 (72.651625ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0911 04:04:36.903823    2163 status.go:415] kubeconfig endpoint: extract IP: "functional-942000" does not appear in /Users/jenkins/minikube-integration/17225-951/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "functional-942000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestFunctional/parallel/CertSync (0.46s)
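
Every failure in this block is one check repeated per path: run `sudo cat <path>` in the guest over `minikube ssh` and diff the bytes against the local reference pem. A minimal standalone sketch of that check, not the actual functional_test.go helper (profile and guest paths are from this run; the local pem path is an assumption):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// verifyCertInVM cats a file inside the minikube guest over SSH and
// compares it byte-for-byte with a local reference certificate.
func verifyCertInVM(profile, vmPath, localPath string) error {
	want, err := os.ReadFile(localPath)
	if err != nil {
		return err
	}
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", profile,
		"ssh", "sudo cat "+vmPath).Output()
	if err != nil {
		// This run dies here: "cat: ...: No such file or directory", exit status 1.
		return fmt.Errorf("cat %s: %w", vmPath, err)
	}
	if !bytes.Equal(bytes.TrimSpace(out), bytes.TrimSpace(want)) {
		return fmt.Errorf("%s does not match %s", vmPath, localPath)
	}
	return nil
}

func main() {
	if err := verifyCertInVM("functional-942000",
		"/etc/ssl/certs/1393.pem", "testdata/minikube_test.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}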

TestFunctional/parallel/NodeLabels (0.14s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-942000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-942000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (27.203542ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-942000

** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-942000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-942000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-942000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-942000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-942000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-942000 -n functional-942000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-942000 -n functional-942000: exit status 6 (114.796542ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0911 03:59:36.171529    2037 status.go:415] kubeconfig endpoint: extract IP: "functional-942000" does not appear in /Users/jenkins/minikube-integration/17225-951/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "functional-942000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestFunctional/parallel/NodeLabels (0.14s)
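
The test drives kubectl with a go-template that prints only the label keys of the first node; with the kubeconfig context gone, kubectl exits before the template is ever evaluated. A sketch of the assertion, assuming the same context name and the four minikube.k8s.io/* keys listed above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`
	out, err := exec.Command("kubectl", "--context", "functional-942000",
		"get", "nodes", "--output=go-template", "--template="+tmpl).CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl failed: %v\n%s", err, out) // here: context not found
		return
	}
	for _, key := range []string{
		"minikube.k8s.io/commit", "minikube.k8s.io/version",
		"minikube.k8s.io/updated_at", "minikube.k8s.io/name",
	} {
		if !strings.Contains(string(out), key) {
			fmt.Printf("missing node label %q\n", key)
		}
	}
}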

TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-942000 image ls --format short --alsologtostderr:

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-942000 image ls --format short --alsologtostderr:
I0911 04:09:37.167453    2400 out.go:296] Setting OutFile to fd 1 ...
I0911 04:09:37.167596    2400 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 04:09:37.167599    2400 out.go:309] Setting ErrFile to fd 2...
I0911 04:09:37.167602    2400 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 04:09:37.167719    2400 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
I0911 04:09:37.168121    2400 config.go:182] Loaded profile config "functional-942000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0911 04:09:37.168180    2400 config.go:182] Loaded profile config "functional-942000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
W0911 04:09:37.168408    2400 cache_images.go:695] error getting status for functional-942000: state: connect: dial unix /Users/jenkins/minikube-integration/17225-951/.minikube/machines/functional-942000/monitor: connect: connection refused
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

TestFunctional/parallel/ImageCommands/ImageListTable (60.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 image ls --format table --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-darwin-arm64 -p functional-942000 image ls --format table --alsologtostderr: (1m0.143429916s)
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-942000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-942000 image ls --format table --alsologtostderr:
I0911 04:11:37.439831    2422 out.go:296] Setting OutFile to fd 1 ...
I0911 04:11:37.440034    2422 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 04:11:37.440038    2422 out.go:309] Setting ErrFile to fd 2...
I0911 04:11:37.440041    2422 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 04:11:37.440198    2422 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
I0911 04:11:37.440786    2422 config.go:182] Loaded profile config "functional-942000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0911 04:11:37.440868    2422 config.go:182] Loaded profile config "functional-942000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0911 04:11:37.442041    2422 ssh_runner.go:195] Run: systemctl --version
I0911 04:11:37.442054    2422 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/functional-942000/id_rsa Username:docker}
I0911 04:11:37.479817    2422 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0911 04:12:37.512744    2422 ssh_runner.go:235] Completed: docker images --no-trunc --format "{{json .}}": (1m0.034275292s)
W0911 04:12:37.512896    2422 cache_images.go:715] Failed to list images for profile functional-942000 docker images: docker images --no-trunc --format "{{json .}}": Process exited with status 1
stdout:

stderr:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (60.14s)
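
The stderr trace shows where the sixty seconds go: `docker images --no-trunc --format "{{json .}}"` hangs inside the guest until the SSH command times out, because dockerd is not running. Against a healthy daemon that command prints one JSON object per line; a sketch of consuming that stream (an illustration, not minikube's cache_images.go code):

package main

import (
	"bufio"
	"bytes"
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerImage holds a subset of the fields emitted by
// `docker images --format "{{json .}}"`.
type dockerImage struct {
	Repository string `json:"Repository"`
	Tag        string `json:"Tag"`
	ID         string `json:"ID"`
	Size       string `json:"Size"`
}

func main() {
	out, err := exec.Command("docker", "images", "--no-trunc",
		"--format", "{{json .}}").Output()
	if err != nil {
		fmt.Println("docker images failed (daemon down?):", err)
		return
	}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() { // one JSON object per line
		var img dockerImage
		if err := json.Unmarshal(sc.Bytes(), &img); err != nil {
			continue
		}
		fmt.Printf("%s:%s (%s)\n", img.Repository, img.Tag, img.Size)
	}
}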

TestFunctional/parallel/ImageCommands/ImageListJson (60.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-darwin-arm64 -p functional-942000 image ls --format json --alsologtostderr: (1m0.137561708s)
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-942000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-942000 image ls --format json --alsologtostderr:
I0911 04:10:37.298609    2415 out.go:296] Setting OutFile to fd 1 ...
I0911 04:10:37.298783    2415 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 04:10:37.298786    2415 out.go:309] Setting ErrFile to fd 2...
I0911 04:10:37.298789    2415 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 04:10:37.298936    2415 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
I0911 04:10:37.299513    2415 config.go:182] Loaded profile config "functional-942000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0911 04:10:37.299589    2415 config.go:182] Loaded profile config "functional-942000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0911 04:10:37.300788    2415 ssh_runner.go:195] Run: systemctl --version
I0911 04:10:37.300797    2415 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/functional-942000/id_rsa Username:docker}
I0911 04:10:37.340438    2415 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0911 04:11:37.369802    2415 ssh_runner.go:235] Completed: docker images --no-trunc --format "{{json .}}": (1m0.03071475s)
W0911 04:11:37.369989    2415 cache_images.go:715] Failed to list images for profile functional-942000 docker images: docker images --no-trunc --format "{{json .}}": Process exited with status 1
stdout:

stderr:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (60.14s)

TestFunctional/parallel/ImageCommands/ImageListYaml (60.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-darwin-arm64 -p functional-942000 image ls --format yaml --alsologtostderr: (1m0.115556708s)
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-942000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-942000 image ls --format yaml --alsologtostderr:
I0911 04:09:37.167450    2401 out.go:296] Setting OutFile to fd 1 ...
I0911 04:09:37.167580    2401 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 04:09:37.167584    2401 out.go:309] Setting ErrFile to fd 2...
I0911 04:09:37.167586    2401 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 04:09:37.167711    2401 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
I0911 04:09:37.168099    2401 config.go:182] Loaded profile config "functional-942000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0911 04:09:37.168159    2401 config.go:182] Loaded profile config "functional-942000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0911 04:09:37.168954    2401 ssh_runner.go:195] Run: systemctl --version
I0911 04:09:37.168966    2401 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/functional-942000/id_rsa Username:docker}
I0911 04:09:37.200254    2401 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0911 04:10:37.228939    2401 ssh_runner.go:235] Completed: docker images --no-trunc --format "{{json .}}": (1m0.03002725s)
W0911 04:10:37.229111    2401 cache_images.go:715] Failed to list images for profile functional-942000 docker images: docker images --no-trunc --format "{{json .}}": Process exited with status 1
stdout:

stderr:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (60.12s)

TestFunctional/parallel/ImageCommands/ImageBuild (120.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-942000 ssh pgrep buildkitd: exit status 1 (67.736083ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 image build -t localhost/my-image:functional-942000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-942000 image build -t localhost/my-image:functional-942000 testdata/build --alsologtostderr: (1m0.001263292s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-942000 image build -t localhost/my-image:functional-942000 testdata/build --alsologtostderr:
I0911 04:09:37.269767    2406 out.go:296] Setting OutFile to fd 1 ...
I0911 04:09:37.269992    2406 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 04:09:37.269994    2406 out.go:309] Setting ErrFile to fd 2...
I0911 04:09:37.269997    2406 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 04:09:37.270112    2406 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
I0911 04:09:37.270532    2406 config.go:182] Loaded profile config "functional-942000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0911 04:09:37.270930    2406 config.go:182] Loaded profile config "functional-942000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0911 04:09:37.271726    2406 ssh_runner.go:195] Run: systemctl --version
I0911 04:09:37.271734    2406 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/functional-942000/id_rsa Username:docker}
I0911 04:09:37.304895    2406 build_images.go:151] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.3225088620.tar
I0911 04:09:37.304945    2406 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0911 04:09:37.307702    2406 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3225088620.tar
I0911 04:09:37.308986    2406 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3225088620.tar: stat -c "%s %y" /var/lib/minikube/build/build.3225088620.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3225088620.tar': No such file or directory
I0911 04:09:37.309002    2406 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.3225088620.tar --> /var/lib/minikube/build/build.3225088620.tar (3072 bytes)
I0911 04:09:37.315709    2406 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3225088620
I0911 04:09:37.318818    2406 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3225088620 -xf /var/lib/minikube/build/build.3225088620.tar
I0911 04:09:37.322105    2406 docker.go:339] Building image: /var/lib/minikube/build/build.3225088620
I0911 04:09:37.322145    2406 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-942000 /var/lib/minikube/build/build.3225088620
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0911 04:10:37.234947    2406 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-942000 /var/lib/minikube/build/build.3225088620: (59.91416775s)
W0911 04:10:37.235059    2406 build_images.go:115] Failed to build image for profile functional-942000. make sure the profile is running. Docker build /var/lib/minikube/build/build.3225088620.tar: buildimage docker: docker build -t localhost/my-image:functional-942000 /var/lib/minikube/build/build.3225088620: Process exited with status 1
stdout:

stderr:
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0911 04:10:37.235089    2406 build_images.go:123] succeeded building to: 
I0911 04:10:37.235097    2406 build_images.go:124] failed building to: functional-942000
W0911 04:10:37.235438    2406 root.go:91] failed to log command end to audit: failed to find a log row with id equals to 42b7a9b6-1075-4fcd-a5c1-e375247b9714
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 image ls
functional_test.go:447: (dbg) Done: out/minikube-darwin-arm64 -p functional-942000 image ls: (1m0.147006958s)
functional_test.go:442: expected "localhost/my-image:functional-942000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (120.22s)
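
The trace spells out the build path: pack testdata/build into a tar on the host, copy it into the guest, unpack it under /var/lib/minikube/build, and run docker build there; this run then stalls for a minute against the dead daemon. A compact sketch of the same sequence driven through the minikube CLI (`minikube cp` and `minikube ssh` standing in for the internal ssh_runner; the tar name is hypothetical):

package main

import (
	"fmt"
	"os/exec"
)

// run shells out and reports failures without aborting, much as the
// trace above keeps going after each failed step.
func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		fmt.Printf("%s %v: %v\n%s", name, args, err, out)
	}
}

func main() {
	const profile = "functional-942000"
	mk := "out/minikube-darwin-arm64"
	run(mk, "-p", profile, "cp", "build.tar", "/var/lib/minikube/build/build.tar")
	run(mk, "-p", profile, "ssh",
		"sudo tar -C /var/lib/minikube/build -xf /var/lib/minikube/build/build.tar")
	run(mk, "-p", profile, "ssh",
		"docker build -t localhost/my-image:"+profile+" /var/lib/minikube/build")
}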

TestFunctional/parallel/DockerEnv/bash (300.16s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-942000 docker-env) && out/minikube-darwin-arm64 status -p functional-942000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-942000 docker-env) && out/minikube-darwin-arm64 status -p functional-942000": signal: killed (5m0.158516416s)
functional_test.go:498: failed to run the command by deadline. exceeded timeout. /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-942000 docker-env) && out/minikube-darwin-arm64 status -p functional-942000"
functional_test.go:501: failed to do status after eval-ing docker-env. error: signal: killed
--- FAIL: TestFunctional/parallel/DockerEnv/bash (300.16s)
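
"signal: killed (5m0.158516416s)" is the harness SIGKILLing the bash child at its five-minute deadline, not a crash inside minikube: `status` never returns because the docker-env subshell cannot reach the guest daemon. The same deadline behaviour falls out of exec.CommandContext; a sketch using the command line from this run:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()
	cmd := exec.CommandContext(ctx, "/bin/bash", "-c",
		"eval $(out/minikube-darwin-arm64 -p functional-942000 docker-env) && "+
			"out/minikube-darwin-arm64 status -p functional-942000")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// At the deadline the process is SIGKILLed and err reads "signal: killed".
		fmt.Printf("command failed: %v (ctx: %v)\n%s", err, ctx.Err(), out)
	}
}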

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (118.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 image load --daemon gcr.io/google-containers/addon-resizer:functional-942000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-942000 image load --daemon gcr.io/google-containers/addon-resizer:functional-942000 --alsologtostderr: (58.184104s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 image ls
functional_test.go:447: (dbg) Done: out/minikube-darwin-arm64 -p functional-942000 image ls: (1m0.135057375s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-942000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (118.32s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (120.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 image load --daemon gcr.io/google-containers/addon-resizer:functional-942000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-942000 image load --daemon gcr.io/google-containers/addon-resizer:functional-942000 --alsologtostderr: (1m0.379289958s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 image ls
functional_test.go:447: (dbg) Done: out/minikube-darwin-arm64 -p functional-942000 image ls: (1m0.139439s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-942000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (120.52s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (120.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.410916125s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-942000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 image load --daemon gcr.io/google-containers/addon-resizer:functional-942000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-942000 image load --daemon gcr.io/google-containers/addon-resizer:functional-942000 --alsologtostderr: (58.599161833s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 image ls
functional_test.go:447: (dbg) Done: out/minikube-darwin-arm64 -p functional-942000 image ls: (1m0.13171625s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-942000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (120.19s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 update-context --alsologtostderr -v=2
functional_test.go:2122: update-context: got="* \"functional-942000\" context has been updated to point to 192.168.105.4:8441\n* Current context is \"functional-942000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-942000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1436: (dbg) Non-zero exit: kubectl --context functional-942000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (25.273083ms)

** stderr ** 
	error: context "functional-942000" does not exist

** /stderr **
functional_test.go:1442: failed to create hello-node deployment with this command "kubectl --context functional-942000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.08s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 service list
functional_test.go:1458: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-942000 service list: exit status 119 (77.507166ms)

-- stdout --
	* This control plane is not running! (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-942000"

-- /stdout --
** stderr ** 
	! This is unusual - you may want to investigate using "minikube logs -p functional-942000"

** /stderr **
functional_test.go:1460: failed to do service list. args "out/minikube-darwin-arm64 -p functional-942000 service list" : exit status 119
functional_test.go:1463: expected 'service list' to contain *hello-node* but got -"* This control plane is not running! (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-942000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.08s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.08s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 service list -o json
functional_test.go:1488: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-942000 service list -o json: exit status 119 (77.879667ms)

-- stdout --
	* This control plane is not running! (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-942000"

-- /stdout --
** stderr ** 
	! This is unusual - you may want to investigate using "minikube logs -p functional-942000"

** /stderr **
functional_test.go:1490: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-942000 service list -o json": exit status 119
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.08s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.08s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 service --namespace=default --https --url hello-node
functional_test.go:1508: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-942000 service --namespace=default --https --url hello-node: exit status 119 (76.84875ms)

-- stdout --
	* This control plane is not running! (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-942000"

-- /stdout --
** stderr ** 
	! This is unusual - you may want to investigate using "minikube logs -p functional-942000"

** /stderr **
functional_test.go:1510: failed to get service url. args "out/minikube-darwin-arm64 -p functional-942000 service --namespace=default --https --url hello-node" : exit status 119
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.08s)

TestFunctional/parallel/ServiceCmd/Format (0.08s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 service hello-node --url --format={{.IP}}
functional_test.go:1539: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-942000 service hello-node --url --format={{.IP}}: exit status 119 (79.809125ms)

-- stdout --
	* This control plane is not running! (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-942000"

-- /stdout --
** stderr ** 
	! This is unusual - you may want to investigate using "minikube logs -p functional-942000"

** /stderr **
functional_test.go:1541: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-942000 service hello-node --url --format={{.IP}}": exit status 119
functional_test.go:1547: "* This control plane is not running! (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-942000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.08s)

TestFunctional/parallel/ServiceCmd/URL (0.07s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 service hello-node --url
functional_test.go:1558: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-942000 service hello-node --url: exit status 119 (73.771416ms)

-- stdout --
	* This control plane is not running! (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-942000"

-- /stdout --
** stderr ** 
	! This is unusual - you may want to investigate using "minikube logs -p functional-942000"

** /stderr **
functional_test.go:1560: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-942000 service hello-node --url": exit status 119
functional_test.go:1564: found endpoint for hello-node: * This control plane is not running! (state=Stopped)
To start a cluster, run: "minikube start -p functional-942000"
functional_test.go:1568: failed to parse "* This control plane is not running! (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-942000\"": parse "* This control plane is not running! (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-942000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.07s)
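
The closing error comes straight from net/url: the "URL" handed back is minikube's two-line control-plane warning, and the embedded newline is the invalid control character. Reproduced in isolation:

package main

import (
	"fmt"
	"net/url"
)

func main() {
	got := "* This control plane is not running! (state=Stopped)\n" +
		"  To start a cluster, run: \"minikube start -p functional-942000\""
	if _, err := url.Parse(got); err != nil {
		fmt.Println(err) // net/url: invalid control character in URL
	}
}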

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-942000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-942000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 119. stderr: I0911 04:04:37.707084    2189 out.go:296] Setting OutFile to fd 1 ...
I0911 04:04:37.707290    2189 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 04:04:37.707293    2189 out.go:309] Setting ErrFile to fd 2...
I0911 04:04:37.707296    2189 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 04:04:37.707413    2189 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
I0911 04:04:37.707639    2189 mustload.go:65] Loading cluster: functional-942000
I0911 04:04:37.707822    2189 config.go:182] Loaded profile config "functional-942000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0911 04:04:37.708489    2189 host.go:66] Checking if "functional-942000" exists ...
I0911 04:04:37.708584    2189 api_server.go:166] Checking apiserver status ...
I0911 04:04:37.708609    2189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0911 04:04:37.708618    2189 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/functional-942000/id_rsa Username:docker}
W0911 04:04:37.742961    2189 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I0911 04:04:37.747697    2189 out.go:177] * This control plane is not running! (state=Stopped)
W0911 04:04:37.754672    2189 out.go:239] ! This is unusual - you may want to investigate using "minikube logs -p functional-942000"
! This is unusual - you may want to investigate using "minikube logs -p functional-942000"
I0911 04:04:37.762475    2189 out.go:177]   To start a cluster, run: "minikube start -p functional-942000"

stdout: * This control plane is not running! (state=Stopped)
To start a cluster, run: "minikube start -p functional-942000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-942000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 2190: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-942000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-942000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-942000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-942000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-942000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.11s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-942000": client config: context "functional-942000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (83.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-942000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-942000 get svc nginx-svc: exit status 1 (66.082875ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-942000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-942000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (83.13s)
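
With no nginx-svc to look up, the test falls back to Get("http:"), which net/http rejects before any network I/O for lacking a host. For reference, the check a passing run performs, with a hypothetical placeholder for the tunnel's LoadBalancer IP:

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	// Placeholder address: in a passing run this is the EXTERNAL-IP
	// reported by `kubectl get svc nginx-svc`.
	resp, err := client.Get("http://10.96.0.100")
	if err != nil {
		fmt.Println("GET failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println("nginx welcome page served:",
		strings.Contains(string(body), "Welcome to nginx!"))
}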

TestFunctional/parallel/ImageCommands/ImageSaveToFile (60.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 image save gcr.io/google-containers/addon-resizer:functional-942000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-arm64 -p functional-942000 image save gcr.io/google-containers/addon-resizer:functional-942000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1m0.141078833s)
functional_test.go:385: expected "/Users/jenkins/workspace/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (60.14s)
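
The assertion behind this failure is simply a stat of the destination tarball after image save returns; a sketch with this run's arguments:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	tar := "/Users/jenkins/workspace/addon-resizer-save.tar"
	cmd := exec.Command("out/minikube-darwin-arm64", "-p", "functional-942000",
		"image", "save", "gcr.io/google-containers/addon-resizer:functional-942000", tar)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("image save failed: %v\n%s", err, out)
	}
	if _, err := os.Stat(tar); err != nil {
		fmt.Println("expected tarball missing:", err) // the condition asserted here
	}
}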

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.041446708s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

                                                
                                                
resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

                                                
                                                
DNS configuration (for scoped queries)

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 14 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)
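Editor's note: the dig probe above asks the in-cluster DNS server (10.96.0.10, visible as resolver #8 for cluster.local) for the service name, which only works while `minikube tunnel` is routing the service CIDR to the VM. A rough Go equivalent of the probe, assuming the same server address and name from this run; the 15s budget mirrors dig's +time=5 +tries=3:

```go
// Query the cluster DNS server directly, bypassing the host resolver,
// the way the dig command above does with @10.96.0.10.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "nginx-svc.default.svc.cluster.local.")
	if err != nil {
		fmt.Println("resolve failed (tunnel not routing the service CIDR?):", err)
		return
	}
	fmt.Println("resolved:", addrs)
}
```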
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (37.12s)
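Editor's note: this check is a plain HTTP GET against the tunneled DNS name with a client-side timeout, which is why the failure surfaces as "Client.Timeout exceeded while awaiting headers". A hedged sketch of the probe; the URL and expected body are taken from this run, the timeout value is illustrative:

```go
// Fetch the nginx service by its cluster DNS name and check the body,
// roughly what functional_test_tunnel_test.go:419/426 assert.
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get("http://nginx-svc.default.svc.cluster.local.")
	if err != nil {
		fmt.Println("request failed:", err) // the context-deadline error above
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if !strings.Contains(string(body), "Welcome to nginx!") {
		fmt.Println("unexpected body")
		return
	}
	fmt.Println("nginx reachable through tunneled DNS")
}
```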
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-942000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port229014619/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1694430424475301000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port229014619/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1694430424475301000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port229014619/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1694430424475301000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port229014619/001/test-1694430424475301000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-942000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (63.855458ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 11 11:07 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 11 11:07 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 11 11:07 test-1694430424475301000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 ssh cat /mount-9p/test-1694430424475301000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-942000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-942000 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (26.209084ms)

** stderr ** 
	error: context "functional-942000" does not exist

** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-942000 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctional/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-942000 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (70.745917ms)

-- stdout --
	192.168.105.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=1000,access=any,msize=65536,trans=tcp,noextend,port=49571)
	total 2
	-rw-r--r-- 1 docker docker 24 Sep 11 11:07 created-by-test
	-rw-r--r-- 1 docker docker 24 Sep 11 11:07 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Sep 11 11:07 test-1694430424475301000
	cat: /mount-9p/pod-dates: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-darwin-arm64 -p functional-942000 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-942000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port229014619/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-darwin-arm64 mount -p functional-942000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port229014619/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port229014619/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.105.1:49571
* Userspace file server: ufs starting
* Successfully mounted /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port229014619/001 to /mount-9p

* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...
functional_test_mount_test.go:94: (dbg) [out/minikube-darwin-arm64 mount -p functional-942000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port229014619/001:/mount-9p --alsologtostderr -v=1] stderr:
I0911 04:07:04.503269    2279 out.go:296] Setting OutFile to fd 1 ...
I0911 04:07:04.503426    2279 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 04:07:04.503429    2279 out.go:309] Setting ErrFile to fd 2...
I0911 04:07:04.503431    2279 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 04:07:04.503546    2279 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
I0911 04:07:04.503733    2279 mustload.go:65] Loading cluster: functional-942000
I0911 04:07:04.503912    2279 config.go:182] Loaded profile config "functional-942000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0911 04:07:04.504565    2279 host.go:66] Checking if "functional-942000" exists ...
I0911 04:07:04.509235    2279 out.go:177] * Mounting host path /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port229014619/001 into VM as /mount-9p ...
I0911 04:07:04.513274    2279 out.go:177]   - Mount type:   9p
I0911 04:07:04.516183    2279 out.go:177]   - User ID:      docker
I0911 04:07:04.520264    2279 out.go:177]   - Group ID:     docker
I0911 04:07:04.524301    2279 out.go:177]   - Version:      9p2000.L
I0911 04:07:04.527190    2279 out.go:177]   - Message Size: 262144
I0911 04:07:04.530277    2279 out.go:177]   - Options:      map[]
I0911 04:07:04.533300    2279 out.go:177]   - Bind Address: 192.168.105.1:49571
I0911 04:07:04.536241    2279 out.go:177] * Userspace file server: 
I0911 04:07:04.536383    2279 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f /mount-9p || echo "
I0911 04:07:04.539334    2279 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/functional-942000/id_rsa Username:docker}
I0911 04:07:04.571071    2279 mount.go:180] unmount for /mount-9p ran successfully
I0911 04:07:04.571106    2279 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I0911 04:07:04.574128    2279 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=49571,trans=tcp,version=9p2000.L 192.168.105.1 /mount-9p"
I0911 04:07:04.578405    2279 main.go:125] stdlog: ufs.go:141 connected
I0911 04:07:04.579540    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tversion tag 65535 msize 65536 version '9P2000.L'
I0911 04:07:04.579567    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rversion tag 65535 msize 65536 version '9P2000'
I0911 04:07:04.579803    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I0911 04:07:04.579855    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rattach tag 0 aqid (427a023 83ec6d9b 'd')
I0911 04:07:04.580159    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tstat tag 0 fid 0
I0911 04:07:04.581284    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rstat tag 0 st ('001' 'jenkins' '20' '' q (427a023 83ec6d9b 'd') m d755 at 0 mt 1694430424 l 160 t 0 d 0 ext )
I0911 04:07:04.581713    2279 lock.go:50] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/functional-942000/.mount-process: {Name:mk4b3f6e9fadb6ce39e66f355ca6a4a7ea1deab7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0911 04:07:04.581913    2279 mount.go:105] mount successful: ""
I0911 04:07:04.585279    2279 out.go:177] * Successfully mounted /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port229014619/001 to /mount-9p
I0911 04:07:04.588240    2279 out.go:177] 
I0911 04:07:04.591247    2279 out.go:177] * NOTE: This process must stay alive for the mount to be accessible ...
I0911 04:07:05.341886    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tstat tag 0 fid 0
I0911 04:07:05.342534    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rstat tag 0 st ('001' 'jenkins' '20' '' q (427a023 83ec6d9b 'd') m d755 at 0 mt 1694430424 l 160 t 0 d 0 ext )
I0911 04:07:05.414843    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tstat tag 0 fid 0
I0911 04:07:05.415254    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rstat tag 0 st ('001' 'jenkins' '20' '' q (427a023 83ec6d9b 'd') m d755 at 0 mt 1694430424 l 160 t 0 d 0 ext )
I0911 04:07:05.415850    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Twalk tag 0 fid 0 newfid 1 
I0911 04:07:05.415869    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rwalk tag 0 
I0911 04:07:05.416067    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Topen tag 0 fid 1 mode 0
I0911 04:07:05.416099    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Ropen tag 0 qid (427a023 83ec6d9b 'd') iounit 0
I0911 04:07:05.416299    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tstat tag 0 fid 0
I0911 04:07:05.416582    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rstat tag 0 st ('001' 'jenkins' '20' '' q (427a023 83ec6d9b 'd') m d755 at 0 mt 1694430424 l 160 t 0 d 0 ext )
I0911 04:07:05.416797    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tread tag 0 fid 1 offset 0 count 65512
I0911 04:07:05.417597    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rread tag 0 count 243
I0911 04:07:05.417824    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tread tag 0 fid 1 offset 243 count 65269
I0911 04:07:05.417842    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rread tag 0 count 0
I0911 04:07:05.418168    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tread tag 0 fid 1 offset 243 count 65512
I0911 04:07:05.418185    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rread tag 0 count 0
I0911 04:07:05.418421    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Twalk tag 0 fid 0 newfid 2 0:'test-1694430424475301000' 
I0911 04:07:05.418442    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rwalk tag 0 (427a026 83ec6d9b '') 
I0911 04:07:05.418673    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tstat tag 0 fid 2
I0911 04:07:05.418916    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rstat tag 0 st ('test-1694430424475301000' 'jenkins' '20' '' q (427a026 83ec6d9b '') m 644 at 0 mt 1694430424 l 24 t 0 d 0 ext )
I0911 04:07:05.419116    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tstat tag 0 fid 2
I0911 04:07:05.419359    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rstat tag 0 st ('test-1694430424475301000' 'jenkins' '20' '' q (427a026 83ec6d9b '') m 644 at 0 mt 1694430424 l 24 t 0 d 0 ext )
I0911 04:07:05.419581    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tclunk tag 0 fid 2
I0911 04:07:05.419590    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rclunk tag 0
I0911 04:07:05.419814    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I0911 04:07:05.419837    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rwalk tag 0 (427a025 83ec6d9b '') 
I0911 04:07:05.420035    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tstat tag 0 fid 2
I0911 04:07:05.420276    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' '20' '' q (427a025 83ec6d9b '') m 644 at 0 mt 1694430424 l 24 t 0 d 0 ext )
I0911 04:07:05.420462    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tstat tag 0 fid 2
I0911 04:07:05.420721    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' '20' '' q (427a025 83ec6d9b '') m 644 at 0 mt 1694430424 l 24 t 0 d 0 ext )
I0911 04:07:05.421081    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tclunk tag 0 fid 2
I0911 04:07:05.421093    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rclunk tag 0
I0911 04:07:05.421386    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I0911 04:07:05.421408    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rwalk tag 0 (427a024 83ec6d9b '') 
I0911 04:07:05.421641    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tstat tag 0 fid 2
I0911 04:07:05.421891    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rstat tag 0 st ('created-by-test' 'jenkins' '20' '' q (427a024 83ec6d9b '') m 644 at 0 mt 1694430424 l 24 t 0 d 0 ext )
I0911 04:07:05.422392    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tstat tag 0 fid 2
I0911 04:07:05.422618    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rstat tag 0 st ('created-by-test' 'jenkins' '20' '' q (427a024 83ec6d9b '') m 644 at 0 mt 1694430424 l 24 t 0 d 0 ext )
I0911 04:07:05.422836    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tclunk tag 0 fid 2
I0911 04:07:05.422845    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rclunk tag 0
I0911 04:07:05.423037    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tread tag 0 fid 1 offset 243 count 65512
I0911 04:07:05.423050    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rread tag 0 count 0
I0911 04:07:05.423216    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tclunk tag 0 fid 1
I0911 04:07:05.423226    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rclunk tag 0
I0911 04:07:05.491053    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Twalk tag 0 fid 0 newfid 1 0:'test-1694430424475301000' 
I0911 04:07:05.491104    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rwalk tag 0 (427a026 83ec6d9b '') 
I0911 04:07:05.491345    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tstat tag 0 fid 1
I0911 04:07:05.491697    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rstat tag 0 st ('test-1694430424475301000' 'jenkins' '20' '' q (427a026 83ec6d9b '') m 644 at 0 mt 1694430424 l 24 t 0 d 0 ext )
I0911 04:07:05.491929    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Twalk tag 0 fid 1 newfid 2 
I0911 04:07:05.491945    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rwalk tag 0 
I0911 04:07:05.492167    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Topen tag 0 fid 2 mode 0
I0911 04:07:05.492200    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Ropen tag 0 qid (427a026 83ec6d9b '') iounit 0
I0911 04:07:05.492421    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tstat tag 0 fid 1
I0911 04:07:05.492752    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rstat tag 0 st ('test-1694430424475301000' 'jenkins' '20' '' q (427a026 83ec6d9b '') m 644 at 0 mt 1694430424 l 24 t 0 d 0 ext )
I0911 04:07:05.493016    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tread tag 0 fid 2 offset 0 count 65512
I0911 04:07:05.493041    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rread tag 0 count 24
I0911 04:07:05.493244    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tread tag 0 fid 2 offset 24 count 65512
I0911 04:07:05.493260    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rread tag 0 count 0
I0911 04:07:05.493463    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tread tag 0 fid 2 offset 24 count 65512
I0911 04:07:05.493478    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rread tag 0 count 0
I0911 04:07:05.493754    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tclunk tag 0 fid 2
I0911 04:07:05.493766    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rclunk tag 0
I0911 04:07:05.493961    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tclunk tag 0 fid 1
I0911 04:07:05.493973    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rclunk tag 0
I0911 04:07:05.587874    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tstat tag 0 fid 0
I0911 04:07:05.588311    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rstat tag 0 st ('001' 'jenkins' '20' '' q (427a023 83ec6d9b 'd') m d755 at 0 mt 1694430424 l 160 t 0 d 0 ext )
I0911 04:07:05.588832    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Twalk tag 0 fid 0 newfid 1 
I0911 04:07:05.588857    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rwalk tag 0 
I0911 04:07:05.589039    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Topen tag 0 fid 1 mode 0
I0911 04:07:05.589081    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Ropen tag 0 qid (427a023 83ec6d9b 'd') iounit 0
I0911 04:07:05.589249    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tstat tag 0 fid 0
I0911 04:07:05.589514    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rstat tag 0 st ('001' 'jenkins' '20' '' q (427a023 83ec6d9b 'd') m d755 at 0 mt 1694430424 l 160 t 0 d 0 ext )
I0911 04:07:05.589734    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tread tag 0 fid 1 offset 0 count 65512
I0911 04:07:05.590512    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rread tag 0 count 243
I0911 04:07:05.590756    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tread tag 0 fid 1 offset 243 count 65269
I0911 04:07:05.590771    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rread tag 0 count 0
I0911 04:07:05.590974    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tread tag 0 fid 1 offset 243 count 65512
I0911 04:07:05.590989    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rread tag 0 count 0
I0911 04:07:05.591163    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Twalk tag 0 fid 0 newfid 2 0:'test-1694430424475301000' 
I0911 04:07:05.591183    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rwalk tag 0 (427a026 83ec6d9b '') 
I0911 04:07:05.591370    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tstat tag 0 fid 2
I0911 04:07:05.591618    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rstat tag 0 st ('test-1694430424475301000' 'jenkins' '20' '' q (427a026 83ec6d9b '') m 644 at 0 mt 1694430424 l 24 t 0 d 0 ext )
I0911 04:07:05.591817    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tstat tag 0 fid 2
I0911 04:07:05.592063    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rstat tag 0 st ('test-1694430424475301000' 'jenkins' '20' '' q (427a026 83ec6d9b '') m 644 at 0 mt 1694430424 l 24 t 0 d 0 ext )
I0911 04:07:05.592233    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tclunk tag 0 fid 2
I0911 04:07:05.592245    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rclunk tag 0
I0911 04:07:05.592493    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I0911 04:07:05.592515    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rwalk tag 0 (427a025 83ec6d9b '') 
I0911 04:07:05.592823    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tstat tag 0 fid 2
I0911 04:07:05.593079    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' '20' '' q (427a025 83ec6d9b '') m 644 at 0 mt 1694430424 l 24 t 0 d 0 ext )
I0911 04:07:05.593409    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tstat tag 0 fid 2
I0911 04:07:05.593641    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' '20' '' q (427a025 83ec6d9b '') m 644 at 0 mt 1694430424 l 24 t 0 d 0 ext )
I0911 04:07:05.593924    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tclunk tag 0 fid 2
I0911 04:07:05.593932    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rclunk tag 0
I0911 04:07:05.594192    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I0911 04:07:05.594214    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rwalk tag 0 (427a024 83ec6d9b '') 
I0911 04:07:05.594505    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tstat tag 0 fid 2
I0911 04:07:05.594751    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rstat tag 0 st ('created-by-test' 'jenkins' '20' '' q (427a024 83ec6d9b '') m 644 at 0 mt 1694430424 l 24 t 0 d 0 ext )
I0911 04:07:05.594961    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tstat tag 0 fid 2
I0911 04:07:05.595195    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rstat tag 0 st ('created-by-test' 'jenkins' '20' '' q (427a024 83ec6d9b '') m 644 at 0 mt 1694430424 l 24 t 0 d 0 ext )
I0911 04:07:05.595371    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tclunk tag 0 fid 2
I0911 04:07:05.595382    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rclunk tag 0
I0911 04:07:05.595537    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tread tag 0 fid 1 offset 243 count 65512
I0911 04:07:05.595551    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rread tag 0 count 0
I0911 04:07:05.595710    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tclunk tag 0 fid 1
I0911 04:07:05.595719    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rclunk tag 0
I0911 04:07:05.596387    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I0911 04:07:05.596409    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rerror tag 0 ename 'file not found' ecode 0
I0911 04:07:05.660956    2279 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.105.4:60418 Tclunk tag 0 fid 0
I0911 04:07:05.660975    2279 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.105.4:60418 Rclunk tag 0
I0911 04:07:05.661201    2279 main.go:125] stdlog: ufs.go:147 disconnected
I0911 04:07:05.679736    2279 out.go:177] * Unmounting /mount-9p ...
I0911 04:07:05.683627    2279 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f /mount-9p || echo "
I0911 04:07:05.686113    2279 mount.go:180] unmount for /mount-9p ran successfully
I0911 04:07:05.686189    2279 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/functional-942000/.mount-process: {Name:mk4b3f6e9fadb6ce39e66f355ca6a4a7ea1deab7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0911 04:07:05.690675    2279 out.go:177] 
W0911 04:07:05.694659    2279 out.go:239] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I0911 04:07:05.697666    2279 out.go:177] 
--- FAIL: TestFunctional/parallel/MountCmd/any-port (1.30s)
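Editor's note: the mount itself came up (the 9p trace above shows the guest reading the test files), so the step that actually fails is `kubectl replace` against a kubeconfig context that no longer exists. A stdlib-only sketch of the pre-flight context check; it assumes kubectl on PATH and uses this run's context name:

```go
// Verify the kubeconfig context exists before running `kubectl replace`,
// the failure mode at functional_test_mount_test.go:148 above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	for _, name := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if name == "functional-942000" {
			fmt.Println("context exists; kubectl replace can proceed")
			return
		}
	}
	fmt.Println(`context "functional-942000" does not exist`) // matches the stderr above
}
```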
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

** stderr ** 
	I0911 04:08:36.956435    2377 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:08:36.956700    2377 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:08:36.956705    2377 out.go:309] Setting ErrFile to fd 2...
	I0911 04:08:36.956708    2377 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:08:36.956860    2377 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:08:36.957400    2377 config.go:182] Loaded profile config "functional-942000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:08:36.957485    2377 config.go:182] Loaded profile config "functional-942000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:08:36.958616    2377 ssh_runner.go:195] Run: systemctl --version
	I0911 04:08:36.958627    2377 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/functional-942000/id_rsa Username:docker}
	I0911 04:08:37.000517    2377 cache_images.go:286] Loading image from: /Users/jenkins/workspace/addon-resizer-save.tar
	W0911 04:08:37.000552    2377 cache_images.go:254] Failed to load cached images for profile functional-942000. make sure the profile is running. loading images: stat /Users/jenkins/workspace/addon-resizer-save.tar: no such file or directory
	I0911 04:08:37.000560    2377 cache_images.go:262] succeeded pushing to: 
	I0911 04:08:37.000562    2377 cache_images.go:263] failed pushing to: functional-942000

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.11s)
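Editor's note: ImageLoadFromFile consumes the tar that ImageSaveToFile failed to produce, so it fails with the same missing-file stat (cache_images.go:254 above). A sketch of guarding the load on the tar's existence; paths and profile name are from this run, and the guard is illustrative:

```go
// Skip `image load` when the tar from the earlier save step is absent,
// mirroring the stat that fails inside cache_images.go.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	tar := "/Users/jenkins/workspace/addon-resizer-save.tar"
	if _, err := os.Stat(tar); err != nil {
		fmt.Println("skipping load, tar missing:", err)
		return
	}
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-942000",
		"image", "load", tar, "--alsologtostderr").CombinedOutput()
	fmt.Printf("err=%v\n%s", err, out)
}
```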
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-094000
image_test.go:105: failed to pass build-args with args: "out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-094000" : 
-- stdout --
	Sending build context to Docker daemon  2.048kB
	Step 1/5 : FROM gcr.io/google-containers/alpine-with-bash:1.0
	 ---> 822c13824dc2
	Step 2/5 : ARG ENV_A
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in efb99dbabc6c
	Removing intermediate container efb99dbabc6c
	 ---> 828f84dcdae6
	Step 3/5 : ARG ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 6a00d2d27d4a
	Removing intermediate container 6a00d2d27d4a
	 ---> 759aaf545c43
	Step 4/5 : RUN echo "test-build-arg" $ENV_A $ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 681f225bd08d
	exec /bin/sh: exec format error

-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            Install the buildx component to build images with BuildKit:
	            https://docs.docker.com/go/buildx/
	
	The command '/bin/sh -c echo "test-build-arg" $ENV_A $ENV_B' returned a non-zero code: 1

** /stderr **
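Editor's note: the root cause is the platform warning in the build output, not the build args: the base image is linux/amd64 while the QEMU guest is linux/arm64, so the RUN step dies with "exec format error". A sketch of inspecting the base image's platform inside the VM before building; the profile and image names are from this run, and the docker CLI is assumed reachable via `minikube ssh`:

```go
// Report the base image's OS/arch from inside the minikube VM.
// On this arm64 host, "linux/amd64" here explains the exec format error.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "image-094000", "ssh",
		"docker image inspect --format '{{.Os}}/{{.Architecture}}' gcr.io/google-containers/alpine-with-bash:1.0",
	).CombinedOutput()
	if err != nil {
		fmt.Printf("inspect failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("base image platform: %s", out) // e.g. linux/amd64
}
```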
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-094000 -n image-094000
helpers_test.go:244: <<< TestImageBuild/serial/BuildWithBuildArg FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestImageBuild/serial/BuildWithBuildArg]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p image-094000 logs -n 25
helpers_test.go:252: TestImageBuild/serial/BuildWithBuildArg logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                           Args                           |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-942000 ssh findmnt                            | functional-942000 | jenkins | v1.31.2 | 11 Sep 23 04:07 PDT |                     |
	|                | -T /mount1                                               |                   |         |         |                     |                     |
	| ssh            | functional-942000 ssh findmnt                            | functional-942000 | jenkins | v1.31.2 | 11 Sep 23 04:07 PDT |                     |
	|                | -T /mount1                                               |                   |         |         |                     |                     |
	| ssh            | functional-942000 ssh findmnt                            | functional-942000 | jenkins | v1.31.2 | 11 Sep 23 04:07 PDT |                     |
	|                | -T /mount1                                               |                   |         |         |                     |                     |
	| ssh            | functional-942000 ssh findmnt                            | functional-942000 | jenkins | v1.31.2 | 11 Sep 23 04:07 PDT |                     |
	|                | -T /mount1                                               |                   |         |         |                     |                     |
	| start          | -p functional-942000                                     | functional-942000 | jenkins | v1.31.2 | 11 Sep 23 04:07 PDT |                     |
	|                | --dry-run --memory                                       |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                                  |                   |         |         |                     |                     |
	|                | --driver=qemu2                                           |                   |         |         |                     |                     |
	| start          | -p functional-942000 --dry-run                           | functional-942000 | jenkins | v1.31.2 | 11 Sep 23 04:07 PDT |                     |
	|                | --alsologtostderr -v=1                                   |                   |         |         |                     |                     |
	|                | --driver=qemu2                                           |                   |         |         |                     |                     |
	| start          | -p functional-942000                                     | functional-942000 | jenkins | v1.31.2 | 11 Sep 23 04:07 PDT |                     |
	|                | --dry-run --memory                                       |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                                  |                   |         |         |                     |                     |
	|                | --driver=qemu2                                           |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                                       | functional-942000 | jenkins | v1.31.2 | 11 Sep 23 04:07 PDT |                     |
	|                | -p functional-942000                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                   |                   |         |         |                     |                     |
	| update-context | functional-942000                                        | functional-942000 | jenkins | v1.31.2 | 11 Sep 23 04:07 PDT | 11 Sep 23 04:07 PDT |
	|                | update-context                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |         |         |                     |                     |
	| update-context | functional-942000                                        | functional-942000 | jenkins | v1.31.2 | 11 Sep 23 04:07 PDT | 11 Sep 23 04:07 PDT |
	|                | update-context                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |         |         |                     |                     |
	| update-context | functional-942000                                        | functional-942000 | jenkins | v1.31.2 | 11 Sep 23 04:07 PDT | 11 Sep 23 04:07 PDT |
	|                | update-context                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |         |         |                     |                     |
	| image          | functional-942000 image ls                               | functional-942000 | jenkins | v1.31.2 | 11 Sep 23 04:07 PDT | 11 Sep 23 04:08 PDT |
	| image          | functional-942000 image load                             | functional-942000 | jenkins | v1.31.2 | 11 Sep 23 04:08 PDT | 11 Sep 23 04:08 PDT |
	|                | /Users/jenkins/workspace/addon-resizer-save.tar          |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-942000 image save --daemon                    | functional-942000 | jenkins | v1.31.2 | 11 Sep 23 04:08 PDT | 11 Sep 23 04:09 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-942000 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-942000                                        | functional-942000 | jenkins | v1.31.2 | 11 Sep 23 04:09 PDT | 11 Sep 23 04:10 PDT |
	|                | image ls --format yaml                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-942000                                        | functional-942000 | jenkins | v1.31.2 | 11 Sep 23 04:09 PDT | 11 Sep 23 04:09 PDT |
	|                | image ls --format short                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| ssh            | functional-942000 ssh pgrep                              | functional-942000 | jenkins | v1.31.2 | 11 Sep 23 04:09 PDT |                     |
	|                | buildkitd                                                |                   |         |         |                     |                     |
	| image          | functional-942000 image build -t                         | functional-942000 | jenkins | v1.31.2 | 11 Sep 23 04:09 PDT |                     |
	|                | localhost/my-image:functional-942000                     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                         |                   |         |         |                     |                     |
	| image          | functional-942000 image ls                               | functional-942000 | jenkins | v1.31.2 | 11 Sep 23 04:10 PDT | 11 Sep 23 04:11 PDT |
	| image          | functional-942000                                        | functional-942000 | jenkins | v1.31.2 | 11 Sep 23 04:10 PDT |                     |
	|                | image ls --format json                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-942000                                        | functional-942000 | jenkins | v1.31.2 | 11 Sep 23 04:11 PDT | 11 Sep 23 04:12 PDT |
	|                | image ls --format table                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| delete         | -p functional-942000                                     | functional-942000 | jenkins | v1.31.2 | 11 Sep 23 04:12 PDT | 11 Sep 23 04:12 PDT |
	| start          | -p image-094000 --driver=qemu2                           | image-094000      | jenkins | v1.31.2 | 11 Sep 23 04:12 PDT | 11 Sep 23 04:13 PDT |
	|                |                                                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                                      | image-094000      | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT | 11 Sep 23 04:13 PDT |
	|                | ./testdata/image-build/test-normal                       |                   |         |         |                     |                     |
	|                | -p image-094000                                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                                      | image-094000      | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT | 11 Sep 23 04:13 PDT |
	|                | --build-opt=build-arg=ENV_A=test_env_str                 |                   |         |         |                     |                     |
	|                | --build-opt=no-cache                                     |                   |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p                       |                   |         |         |                     |                     |
	|                | image-094000                                             |                   |         |         |                     |                     |
	|----------------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/11 04:12:37
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0911 04:12:37.908552    2446 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:12:37.908664    2446 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:12:37.908666    2446 out.go:309] Setting ErrFile to fd 2...
	I0911 04:12:37.908667    2446 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:12:37.908784    2446 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:12:37.909766    2446 out.go:303] Setting JSON to false
	I0911 04:12:37.924878    2446 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2531,"bootTime":1694428226,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0911 04:12:37.924949    2446 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:12:37.929421    2446 out.go:177] * [image-094000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:12:37.937347    2446 out.go:177]   - MINIKUBE_LOCATION=17225
	I0911 04:12:37.937406    2446 notify.go:220] Checking for updates...
	I0911 04:12:37.941405    2446 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 04:12:37.944312    2446 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:12:37.947319    2446 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:12:37.950333    2446 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	I0911 04:12:37.951501    2446 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:12:37.954485    2446 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:12:37.958340    2446 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:12:37.963317    2446 start.go:298] selected driver: qemu2
	I0911 04:12:37.963321    2446 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:12:37.963331    2446 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:12:37.963399    2446 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:12:37.966329    2446 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:12:37.971166    2446 start_flags.go:384] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0911 04:12:37.971257    2446 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0911 04:12:37.971273    2446 cni.go:84] Creating CNI manager for ""
	I0911 04:12:37.971279    2446 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 04:12:37.971283    2446 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0911 04:12:37.971289    2446 start_flags.go:321] config:
	{Name:image-094000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:image-094000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:12:37.975225    2446 iso.go:125] acquiring lock: {Name:mk940869f3ea2950f9d7f9f25946c8b774cc6054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:12:37.982344    2446 out.go:177] * Starting control plane node image-094000 in cluster image-094000
	I0911 04:12:37.986303    2446 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:12:37.986322    2446 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:12:37.986338    2446 cache.go:57] Caching tarball of preloaded images
	I0911 04:12:37.986395    2446 preload.go:174] Found /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:12:37.986398    2446 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:12:37.986582    2446 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/image-094000/config.json ...
	I0911 04:12:37.986592    2446 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/image-094000/config.json: {Name:mkceb5fbdf6894ecd90bac02ddf8a11467109664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:12:37.986784    2446 start.go:365] acquiring machines lock for image-094000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:12:37.986810    2446 start.go:369] acquired machines lock for "image-094000" in 22.5µs
	I0911 04:12:37.986819    2446 start.go:93] Provisioning new machine with config: &{Name:image-094000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:image-094000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:12:37.986841    2446 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:12:37.995268    2446 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0911 04:12:38.010682    2446 start.go:159] libmachine.API.Create for "image-094000" (driver="qemu2")
	I0911 04:12:38.010704    2446 client.go:168] LocalClient.Create starting
	I0911 04:12:38.010755    2446 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:12:38.010776    2446 main.go:141] libmachine: Decoding PEM data...
	I0911 04:12:38.010786    2446 main.go:141] libmachine: Parsing certificate...
	I0911 04:12:38.010820    2446 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:12:38.010836    2446 main.go:141] libmachine: Decoding PEM data...
	I0911 04:12:38.010849    2446 main.go:141] libmachine: Parsing certificate...
	I0911 04:12:38.011183    2446 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:12:38.260355    2446 main.go:141] libmachine: Creating SSH key...
	I0911 04:12:38.322229    2446 main.go:141] libmachine: Creating Disk image...
	I0911 04:12:38.322235    2446 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:12:38.322378    2446 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/image-094000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/image-094000/disk.qcow2
	I0911 04:12:38.330768    2446 main.go:141] libmachine: STDOUT: 
	I0911 04:12:38.330777    2446 main.go:141] libmachine: STDERR: 
	I0911 04:12:38.330823    2446 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/image-094000/disk.qcow2 +20000M
	I0911 04:12:38.337882    2446 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:12:38.337889    2446 main.go:141] libmachine: STDERR: 
	I0911 04:12:38.337902    2446 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/image-094000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/image-094000/disk.qcow2
	I0911 04:12:38.337907    2446 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:12:38.337942    2446 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/image-094000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/image-094000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/image-094000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:79:d1:4b:1f:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/image-094000/disk.qcow2
	I0911 04:12:38.371519    2446 main.go:141] libmachine: STDOUT: 
	I0911 04:12:38.371533    2446 main.go:141] libmachine: STDERR: 
	I0911 04:12:38.371535    2446 main.go:141] libmachine: Attempt 0
	I0911 04:12:38.371544    2446 main.go:141] libmachine: Searching for 4a:79:d1:4b:1f:20 in /var/db/dhcpd_leases ...
	I0911 04:12:38.371604    2446 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0911 04:12:38.371621    2446 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:3e:be:43:94:58:11 ID:1,3e:be:43:94:58:11 Lease:0x65003eea}
	I0911 04:12:38.371626    2446 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:d6:e2:35:eb:4d:96 ID:1,d6:e2:35:eb:4d:96 Lease:0x64feed5e}
	I0911 04:12:38.371630    2446 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:f2:70:fc:4b:8b:fb ID:1,f2:70:fc:4b:8b:fb Lease:0x65003e9d}
	I0911 04:12:40.373739    2446 main.go:141] libmachine: Attempt 1
	I0911 04:12:40.373783    2446 main.go:141] libmachine: Searching for 4a:79:d1:4b:1f:20 in /var/db/dhcpd_leases ...
	I0911 04:12:40.374210    2446 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0911 04:12:40.374252    2446 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:3e:be:43:94:58:11 ID:1,3e:be:43:94:58:11 Lease:0x65003eea}
	I0911 04:12:40.374279    2446 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:d6:e2:35:eb:4d:96 ID:1,d6:e2:35:eb:4d:96 Lease:0x64feed5e}
	I0911 04:12:40.374307    2446 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:f2:70:fc:4b:8b:fb ID:1,f2:70:fc:4b:8b:fb Lease:0x65003e9d}
	I0911 04:12:42.376412    2446 main.go:141] libmachine: Attempt 2
	I0911 04:12:42.376427    2446 main.go:141] libmachine: Searching for 4a:79:d1:4b:1f:20 in /var/db/dhcpd_leases ...
	I0911 04:12:42.376540    2446 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0911 04:12:42.376560    2446 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:3e:be:43:94:58:11 ID:1,3e:be:43:94:58:11 Lease:0x65003eea}
	I0911 04:12:42.376564    2446 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:d6:e2:35:eb:4d:96 ID:1,d6:e2:35:eb:4d:96 Lease:0x64feed5e}
	I0911 04:12:42.376584    2446 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:f2:70:fc:4b:8b:fb ID:1,f2:70:fc:4b:8b:fb Lease:0x65003e9d}
	I0911 04:12:44.378618    2446 main.go:141] libmachine: Attempt 3
	I0911 04:12:44.378657    2446 main.go:141] libmachine: Searching for 4a:79:d1:4b:1f:20 in /var/db/dhcpd_leases ...
	I0911 04:12:44.378733    2446 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0911 04:12:44.378738    2446 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:3e:be:43:94:58:11 ID:1,3e:be:43:94:58:11 Lease:0x65003eea}
	I0911 04:12:44.378755    2446 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:d6:e2:35:eb:4d:96 ID:1,d6:e2:35:eb:4d:96 Lease:0x64feed5e}
	I0911 04:12:44.378759    2446 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:f2:70:fc:4b:8b:fb ID:1,f2:70:fc:4b:8b:fb Lease:0x65003e9d}
	I0911 04:12:46.380732    2446 main.go:141] libmachine: Attempt 4
	I0911 04:12:46.380736    2446 main.go:141] libmachine: Searching for 4a:79:d1:4b:1f:20 in /var/db/dhcpd_leases ...
	I0911 04:12:46.380777    2446 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0911 04:12:46.380782    2446 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:3e:be:43:94:58:11 ID:1,3e:be:43:94:58:11 Lease:0x65003eea}
	I0911 04:12:46.380787    2446 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:d6:e2:35:eb:4d:96 ID:1,d6:e2:35:eb:4d:96 Lease:0x64feed5e}
	I0911 04:12:46.380791    2446 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:f2:70:fc:4b:8b:fb ID:1,f2:70:fc:4b:8b:fb Lease:0x65003e9d}
	I0911 04:12:48.382775    2446 main.go:141] libmachine: Attempt 5
	I0911 04:12:48.382788    2446 main.go:141] libmachine: Searching for 4a:79:d1:4b:1f:20 in /var/db/dhcpd_leases ...
	I0911 04:12:48.382887    2446 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0911 04:12:48.382895    2446 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:3e:be:43:94:58:11 ID:1,3e:be:43:94:58:11 Lease:0x65003eea}
	I0911 04:12:48.382899    2446 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:d6:e2:35:eb:4d:96 ID:1,d6:e2:35:eb:4d:96 Lease:0x64feed5e}
	I0911 04:12:48.382903    2446 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:f2:70:fc:4b:8b:fb ID:1,f2:70:fc:4b:8b:fb Lease:0x65003e9d}
	I0911 04:12:50.384933    2446 main.go:141] libmachine: Attempt 6
	I0911 04:12:50.384949    2446 main.go:141] libmachine: Searching for 4a:79:d1:4b:1f:20 in /var/db/dhcpd_leases ...
	I0911 04:12:50.385060    2446 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0911 04:12:50.385071    2446 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:4a:79:d1:4b:1f:20 ID:1,4a:79:d1:4b:1f:20 Lease:0x650047b1}
	I0911 04:12:50.385074    2446 main.go:141] libmachine: Found match: 4a:79:d1:4b:1f:20
	I0911 04:12:50.385086    2446 main.go:141] libmachine: IP: 192.168.105.5
	I0911 04:12:50.385089    2446 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
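
Note: attempts 0-5 above poll /var/db/dhcpd_leases every two seconds until the VM's MAC address appears. A hedged sketch of that lookup, assuming the usual macOS bootpd entry layout (an ip_address= line precedes hw_address= within each {...} block):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"time"
)

// findIP scans the lease file for an entry whose hw_address ends with the
// given MAC and returns the ip_address recorded earlier in the same entry.
func findIP(leaseFile, mac string) (string, bool) {
	f, err := os.Open(leaseFile)
	if err != nil {
		return "", false
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac):
			return ip, true
		case line == "}":
			ip = "" // entry ended without matching
		}
	}
	return "", false
}

func main() {
	for attempt := 0; attempt < 30; attempt++ {
		if ip, ok := findIP("/var/db/dhcpd_leases", "4a:79:d1:4b:1f:20"); ok {
			fmt.Println("IP:", ip)
			return
		}
		time.Sleep(2 * time.Second) // the two-second cadence seen in the log
	}
}
```
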
	I0911 04:12:52.402201    2446 machine.go:88] provisioning docker machine ...
	I0911 04:12:52.402244    2446 buildroot.go:166] provisioning hostname "image-094000"
	I0911 04:12:52.402378    2446 main.go:141] libmachine: Using SSH client type: native
	I0911 04:12:52.403259    2446 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e123b0] 0x100e14e10 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0911 04:12:52.403276    2446 main.go:141] libmachine: About to run SSH command:
	sudo hostname image-094000 && echo "image-094000" | sudo tee /etc/hostname
	I0911 04:12:52.498757    2446 main.go:141] libmachine: SSH cmd err, output: <nil>: image-094000
	
	I0911 04:12:52.498880    2446 main.go:141] libmachine: Using SSH client type: native
	I0911 04:12:52.499370    2446 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e123b0] 0x100e14e10 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0911 04:12:52.499383    2446 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\simage-094000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 image-094000/g' /etc/hosts;
				else 
					echo '127.0.1.1 image-094000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 04:12:52.578132    2446 main.go:141] libmachine: SSH cmd err, output: <nil>: 
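
Note: each "About to run SSH command" pair above is one remote step of machine provisioning. A sketch of such an SSH runner using golang.org/x/crypto/ssh; the host, user, and key path come from the log, but this is illustrative, not minikube's own sshutil:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runSSH executes one command on the guest and returns its combined output.
func runSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runSSH("192.168.105.5:22", "docker",
		"/Users/jenkins/minikube-integration/17225-951/.minikube/machines/image-094000/id_rsa",
		`sudo hostname image-094000 && echo "image-094000" | sudo tee /etc/hostname`)
	fmt.Println(out, err)
}
```
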
	I0911 04:12:52.578150    2446 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17225-951/.minikube CaCertPath:/Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17225-951/.minikube}
	I0911 04:12:52.578175    2446 buildroot.go:174] setting up certificates
	I0911 04:12:52.578182    2446 provision.go:83] configureAuth start
	I0911 04:12:52.578186    2446 provision.go:138] copyHostCerts
	I0911 04:12:52.578292    2446 exec_runner.go:144] found /Users/jenkins/minikube-integration/17225-951/.minikube/ca.pem, removing ...
	I0911 04:12:52.578298    2446 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17225-951/.minikube/ca.pem
	I0911 04:12:52.578494    2446 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17225-951/.minikube/ca.pem (1078 bytes)
	I0911 04:12:52.578729    2446 exec_runner.go:144] found /Users/jenkins/minikube-integration/17225-951/.minikube/cert.pem, removing ...
	I0911 04:12:52.578731    2446 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17225-951/.minikube/cert.pem
	I0911 04:12:52.578804    2446 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17225-951/.minikube/cert.pem (1123 bytes)
	I0911 04:12:52.578952    2446 exec_runner.go:144] found /Users/jenkins/minikube-integration/17225-951/.minikube/key.pem, removing ...
	I0911 04:12:52.578954    2446 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17225-951/.minikube/key.pem
	I0911 04:12:52.579018    2446 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17225-951/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17225-951/.minikube/key.pem (1675 bytes)
	I0911 04:12:52.579142    2446 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17225-951/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca-key.pem org=jenkins.image-094000 san=[192.168.105.5 192.168.105.5 localhost 127.0.0.1 minikube image-094000]
	I0911 04:12:52.742297    2446 provision.go:172] copyRemoteCerts
	I0911 04:12:52.742331    2446 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 04:12:52.742341    2446 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/image-094000/id_rsa Username:docker}
	I0911 04:12:52.778127    2446 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0911 04:12:52.784952    2446 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0911 04:12:52.791815    2446 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0911 04:12:52.799045    2446 provision.go:86] duration metric: configureAuth took 220.856416ms
	I0911 04:12:52.799051    2446 buildroot.go:189] setting minikube options for container-runtime
	I0911 04:12:52.799169    2446 config.go:182] Loaded profile config "image-094000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:12:52.799208    2446 main.go:141] libmachine: Using SSH client type: native
	I0911 04:12:52.799421    2446 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e123b0] 0x100e14e10 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0911 04:12:52.799424    2446 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0911 04:12:52.865600    2446 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0911 04:12:52.865608    2446 buildroot.go:70] root file system type: tmpfs
	I0911 04:12:52.865667    2446 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0911 04:12:52.865711    2446 main.go:141] libmachine: Using SSH client type: native
	I0911 04:12:52.865949    2446 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e123b0] 0x100e14e10 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0911 04:12:52.865985    2446 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0911 04:12:52.933518    2446 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0911 04:12:52.933567    2446 main.go:141] libmachine: Using SSH client type: native
	I0911 04:12:52.933826    2446 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e123b0] 0x100e14e10 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0911 04:12:52.933834    2446 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0911 04:12:53.258057    2446 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0911 04:12:53.258067    2446 machine.go:91] provisioned docker machine in 855.874291ms
	I0911 04:12:53.258071    2446 client.go:171] LocalClient.Create took 15.247719708s
	I0911 04:12:53.258085    2446 start.go:167] duration metric: libmachine.API.Create for "image-094000" took 15.247760917s
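
Note: the %!s(MISSING) tokens in the unit-install commands above are a logging quirk, not part of what ran on the guest: the remote command is printf %s "<unit text>", and when that string is later passed through Go's fmt without an argument, the bare %s verb renders as %!s(MISSING). A sketch of rendering such a docker.service unit with text/template before shipping it to /lib/systemd/system/docker.service.new (the template fields are illustrative; the flag values come from the log):

```go
package main

import (
	"os"
	"text/template"
)

// A trimmed-down docker.service template; the full unit logged above also
// sets ulimits, Delegate=yes, KillMode=process, and --insecure-registry.
const unit = `[Unit]
Description=Docker Application Container Engine
After=network.target minikube-automount.service docker.socket
Requires=minikube-automount.service docker.socket

[Service]
Type=notify
Restart=on-failure
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --tlsverify --tlscacert {{.CACert}} --tlscert {{.ServerCert}} --tlskey {{.ServerKey}} --label provider={{.Provider}}
ExecReload=/bin/kill -s HUP $MAINPID

[Install]
WantedBy=multi-user.target
`

func main() {
	t := template.Must(template.New("docker.service").Parse(unit))
	if err := t.Execute(os.Stdout, map[string]string{
		"CACert":     "/etc/docker/ca.pem",
		"ServerCert": "/etc/docker/server.pem",
		"ServerKey":  "/etc/docker/server-key.pem",
		"Provider":   "qemu2",
	}); err != nil {
		panic(err)
	}
}
```
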
	I0911 04:12:53.258089    2446 start.go:300] post-start starting for "image-094000" (driver="qemu2")
	I0911 04:12:53.258093    2446 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 04:12:53.258163    2446 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 04:12:53.258170    2446 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/image-094000/id_rsa Username:docker}
	I0911 04:12:53.295374    2446 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 04:12:53.296762    2446 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 04:12:53.296770    2446 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17225-951/.minikube/addons for local assets ...
	I0911 04:12:53.296848    2446 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17225-951/.minikube/files for local assets ...
	I0911 04:12:53.296961    2446 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17225-951/.minikube/files/etc/ssl/certs/13932.pem -> 13932.pem in /etc/ssl/certs
	I0911 04:12:53.297066    2446 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 04:12:53.300023    2446 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/files/etc/ssl/certs/13932.pem --> /etc/ssl/certs/13932.pem (1708 bytes)
	I0911 04:12:53.307452    2446 start.go:303] post-start completed in 49.359875ms
	I0911 04:12:53.307803    2446 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/image-094000/config.json ...
	I0911 04:12:53.307947    2446 start.go:128] duration metric: createHost completed in 15.321461s
	I0911 04:12:53.307972    2446 main.go:141] libmachine: Using SSH client type: native
	I0911 04:12:53.308191    2446 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e123b0] 0x100e14e10 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0911 04:12:53.308194    2446 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0911 04:12:53.372360    2446 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694430773.597784669
	
	I0911 04:12:53.372365    2446 fix.go:206] guest clock: 1694430773.597784669
	I0911 04:12:53.372368    2446 fix.go:219] Guest: 2023-09-11 04:12:53.597784669 -0700 PDT Remote: 2023-09-11 04:12:53.307948 -0700 PDT m=+15.418790209 (delta=289.836669ms)
	I0911 04:12:53.372377    2446 fix.go:190] guest clock delta is within tolerance: 289.836669ms
	I0911 04:12:53.372379    2446 start.go:83] releasing machines lock for "image-094000", held for 15.385924333s
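
Note: the guest-clock check above runs date +%s.%N in the guest (logged with the same %!s/%!N quirk) and compares the result with the host clock against a tolerance. A small sketch of that comparison, reusing the timestamp from the log; in the real flow the two readings are taken back-to-back, so the delta is milliseconds:

```go
package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and reports how far it
// is from the host clock, plus whether it is inside the tolerance.
func clockDelta(guestOut string, tol time.Duration) (time.Duration, bool) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, false
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol
}

func main() {
	d, ok := clockDelta("1694430773.597784669", 2*time.Second)
	fmt.Println(d, ok)
}
```
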
	I0911 04:12:53.372715    2446 ssh_runner.go:195] Run: cat /version.json
	I0911 04:12:53.372715    2446 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 04:12:53.372721    2446 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/image-094000/id_rsa Username:docker}
	I0911 04:12:53.372736    2446 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/image-094000/id_rsa Username:docker}
	I0911 04:12:53.404693    2446 ssh_runner.go:195] Run: systemctl --version
	I0911 04:12:53.407318    2446 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0911 04:12:53.449217    2446 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 04:12:53.449256    2446 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 04:12:53.454449    2446 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0911 04:12:53.454454    2446 start.go:466] detecting cgroup driver to use...
	I0911 04:12:53.454522    2446 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 04:12:53.460355    2446 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0911 04:12:53.463885    2446 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0911 04:12:53.467495    2446 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0911 04:12:53.467515    2446 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0911 04:12:53.471322    2446 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0911 04:12:53.474783    2446 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0911 04:12:53.477660    2446 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0911 04:12:53.480402    2446 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 04:12:53.483906    2446 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0911 04:12:53.487495    2446 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 04:12:53.490686    2446 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 04:12:53.493565    2446 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 04:12:53.554403    2446 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0911 04:12:53.562775    2446 start.go:466] detecting cgroup driver to use...
	I0911 04:12:53.562831    2446 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0911 04:12:53.568422    2446 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 04:12:53.573282    2446 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 04:12:53.579354    2446 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 04:12:53.584365    2446 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0911 04:12:53.589416    2446 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0911 04:12:53.635736    2446 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0911 04:12:53.641339    2446 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 04:12:53.646909    2446 ssh_runner.go:195] Run: which cri-dockerd
	I0911 04:12:53.648483    2446 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0911 04:12:53.651572    2446 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0911 04:12:53.656703    2446 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0911 04:12:53.717980    2446 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0911 04:12:53.777922    2446 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0911 04:12:53.777931    2446 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0911 04:12:53.783282    2446 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 04:12:53.840544    2446 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0911 04:12:54.991877    2446 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.151347s)
	I0911 04:12:54.991935    2446 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0911 04:12:55.054760    2446 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0911 04:12:55.119444    2446 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0911 04:12:55.182717    2446 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 04:12:55.240354    2446 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0911 04:12:55.248293    2446 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 04:12:55.318868    2446 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0911 04:12:55.344031    2446 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0911 04:12:55.344099    2446 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0911 04:12:55.346116    2446 start.go:534] Will wait 60s for crictl version
	I0911 04:12:55.346152    2446 ssh_runner.go:195] Run: which crictl
	I0911 04:12:55.347631    2446 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 04:12:55.363193    2446 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.5
	RuntimeApiVersion:  v1alpha2
	I0911 04:12:55.363251    2446 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0911 04:12:55.372964    2446 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0911 04:12:55.394273    2446 out.go:204] * Preparing Kubernetes v1.28.1 on Docker 24.0.5 ...
	I0911 04:12:55.394425    2446 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0911 04:12:55.395885    2446 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 04:12:55.400076    2446 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:12:55.400119    2446 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0911 04:12:55.405519    2446 docker.go:636] Got preloaded images: 
	I0911 04:12:55.405523    2446 docker.go:642] registry.k8s.io/kube-apiserver:v1.28.1 wasn't preloaded
	I0911 04:12:55.405562    2446 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0911 04:12:55.408456    2446 ssh_runner.go:195] Run: which lz4
	I0911 04:12:55.409952    2446 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0911 04:12:55.411124    2446 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0911 04:12:55.411134    2446 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356902558 bytes)
	I0911 04:12:56.738270    2446 docker.go:600] Took 1.328399 seconds to copy over tarball
	I0911 04:12:56.738322    2446 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0911 04:12:57.772007    2446 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.033699167s)
	I0911 04:12:57.772016    2446 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0911 04:12:57.788220    2446 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0911 04:12:57.791614    2446 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0911 04:12:57.796787    2446 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 04:12:57.858209    2446 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0911 04:12:59.486419    2446 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.628230125s)
	I0911 04:12:59.486501    2446 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0911 04:12:59.492647    2446 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0911 04:12:59.492653    2446 cache_images.go:84] Images are preloaded, skipping loading
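
Note: the preload decision above keys on whether registry.k8s.io/kube-apiserver:v1.28.1 is already present; when it is not, the tarball is copied over, unpacked, and the image list rechecked. A minimal sketch of that check (the reference image is the one the log uses):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// preloaded reports whether the reference image shows up in
// `docker images --format {{.Repository}}:{{.Tag}}`.
func preloaded(ref string) (bool, error) {
	out, err := exec.Command("docker", "images",
		"--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return false, err
	}
	for _, img := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if img == ref {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := preloaded("registry.k8s.io/kube-apiserver:v1.28.1")
	fmt.Println(ok, err)
}
```
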
	I0911 04:12:59.492719    2446 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0911 04:12:59.500252    2446 cni.go:84] Creating CNI manager for ""
	I0911 04:12:59.500257    2446 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 04:12:59.500267    2446 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 04:12:59.500275    2446 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.5 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:image-094000 NodeName:image-094000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 04:12:59.500701    2446 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "image-094000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 04:12:59.500746    2446 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=image-094000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:image-094000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0911 04:12:59.500821    2446 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 04:12:59.504120    2446 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 04:12:59.504160    2446 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 04:12:59.507354    2446 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0911 04:12:59.512491    2446 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 04:12:59.517617    2446 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0911 04:12:59.522595    2446 ssh_runner.go:195] Run: grep 192.168.105.5	control-plane.minikube.internal$ /etc/hosts
	I0911 04:12:59.523996    2446 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 04:12:59.527985    2446 certs.go:56] Setting up /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/image-094000 for IP: 192.168.105.5
	I0911 04:12:59.527994    2446 certs.go:190] acquiring lock for shared ca certs: {Name:mkb829580b94fbef660a72f5d00b6f296afd6da6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:12:59.528127    2446 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17225-951/.minikube/ca.key
	I0911 04:12:59.528163    2446 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17225-951/.minikube/proxy-client-ca.key
	I0911 04:12:59.528190    2446 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/image-094000/client.key
	I0911 04:12:59.528197    2446 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/image-094000/client.crt with IP's: []
	I0911 04:12:59.743798    2446 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/image-094000/client.crt ...
	I0911 04:12:59.743803    2446 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/image-094000/client.crt: {Name:mkb8f11affe0d967aaa6f8e1a126cd294283f9f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:12:59.744146    2446 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/image-094000/client.key ...
	I0911 04:12:59.744149    2446 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/image-094000/client.key: {Name:mk3d7f9a13b7e0f8458116873ed9923fdb881fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:12:59.744278    2446 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/image-094000/apiserver.key.e69b33ca
	I0911 04:12:59.744284    2446 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/image-094000/apiserver.crt.e69b33ca with IP's: [192.168.105.5 10.96.0.1 127.0.0.1 10.0.0.1]
	I0911 04:12:59.798381    2446 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/image-094000/apiserver.crt.e69b33ca ...
	I0911 04:12:59.798383    2446 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/image-094000/apiserver.crt.e69b33ca: {Name:mkd789af7c0ccd909100bc5b8d2247ad980d32f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:12:59.798520    2446 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/image-094000/apiserver.key.e69b33ca ...
	I0911 04:12:59.798522    2446 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/image-094000/apiserver.key.e69b33ca: {Name:mk016f7608595464eb229274440cec2ddf8cd1e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:12:59.798622    2446 certs.go:337] copying /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/image-094000/apiserver.crt.e69b33ca -> /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/image-094000/apiserver.crt
	I0911 04:12:59.798824    2446 certs.go:341] copying /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/image-094000/apiserver.key.e69b33ca -> /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/image-094000/apiserver.key
	I0911 04:12:59.798943    2446 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/image-094000/proxy-client.key
	I0911 04:12:59.798949    2446 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/image-094000/proxy-client.crt with IP's: []
	I0911 04:12:59.942195    2446 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/image-094000/proxy-client.crt ...
	I0911 04:12:59.942199    2446 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/image-094000/proxy-client.crt: {Name:mkcd33e6b4eb5284ea76cf911858524bf0cc0c4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:12:59.942412    2446 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/image-094000/proxy-client.key ...
	I0911 04:12:59.942414    2446 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/image-094000/proxy-client.key: {Name:mk88c9317e0a9f2cf87c3440b206b36bbe475436 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:12:59.942654    2446 certs.go:437] found cert: /Users/jenkins/minikube-integration/17225-951/.minikube/certs/Users/jenkins/minikube-integration/17225-951/.minikube/certs/1393.pem (1338 bytes)
	W0911 04:12:59.942685    2446 certs.go:433] ignoring /Users/jenkins/minikube-integration/17225-951/.minikube/certs/Users/jenkins/minikube-integration/17225-951/.minikube/certs/1393_empty.pem, impossibly tiny 0 bytes
	I0911 04:12:59.942690    2446 certs.go:437] found cert: /Users/jenkins/minikube-integration/17225-951/.minikube/certs/Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 04:12:59.942711    2446 certs.go:437] found cert: /Users/jenkins/minikube-integration/17225-951/.minikube/certs/Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem (1078 bytes)
	I0911 04:12:59.942728    2446 certs.go:437] found cert: /Users/jenkins/minikube-integration/17225-951/.minikube/certs/Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem (1123 bytes)
	I0911 04:12:59.942744    2446 certs.go:437] found cert: /Users/jenkins/minikube-integration/17225-951/.minikube/certs/Users/jenkins/minikube-integration/17225-951/.minikube/certs/key.pem (1675 bytes)
	I0911 04:12:59.942783    2446 certs.go:437] found cert: /Users/jenkins/minikube-integration/17225-951/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17225-951/.minikube/files/etc/ssl/certs/13932.pem (1708 bytes)
	I0911 04:12:59.943084    2446 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/image-094000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 04:12:59.951092    2446 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/image-094000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0911 04:12:59.958722    2446 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/image-094000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 04:12:59.965888    2446 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/image-094000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0911 04:12:59.972368    2446 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 04:12:59.979501    2446 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 04:12:59.987019    2446 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 04:12:59.994282    2446 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0911 04:13:00.001158    2446 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 04:13:00.007925    2446 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/certs/1393.pem --> /usr/share/ca-certificates/1393.pem (1338 bytes)
	I0911 04:13:00.015329    2446 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/files/etc/ssl/certs/13932.pem --> /usr/share/ca-certificates/13932.pem (1708 bytes)
	I0911 04:13:00.023023    2446 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 04:13:00.028545    2446 ssh_runner.go:195] Run: openssl version
	I0911 04:13:00.030664    2446 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 04:13:00.033792    2446 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 04:13:00.035325    2446 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0911 04:13:00.035345    2446 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 04:13:00.037481    2446 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0911 04:13:00.040545    2446 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1393.pem && ln -fs /usr/share/ca-certificates/1393.pem /etc/ssl/certs/1393.pem"
	I0911 04:13:00.043947    2446 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1393.pem
	I0911 04:13:00.045556    2446 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 10:35 /usr/share/ca-certificates/1393.pem
	I0911 04:13:00.045575    2446 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1393.pem
	I0911 04:13:00.047314    2446 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1393.pem /etc/ssl/certs/51391683.0"
	I0911 04:13:00.050548    2446 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13932.pem && ln -fs /usr/share/ca-certificates/13932.pem /etc/ssl/certs/13932.pem"
	I0911 04:13:00.053676    2446 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13932.pem
	I0911 04:13:00.055224    2446 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 10:35 /usr/share/ca-certificates/13932.pem
	I0911 04:13:00.055242    2446 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13932.pem
	I0911 04:13:00.057403    2446 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13932.pem /etc/ssl/certs/3ec20f2e.0"
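
Note: each test/ln pair above installs a CA under its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem) so TLS clients on the node can locate it. A sketch of one iteration, shelling out to openssl for the hash the same way the log does; it would need root on the guest to write /etc/ssl/certs:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCA links /etc/ssl/certs/<subject-hash>.0 at the given PEM,
// mirroring `openssl x509 -hash -noout -in <pem>` plus `ln -fs`.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509",
		"-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	os.Remove(link) // emulate ln -fs: drop any stale link first
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
```
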
	I0911 04:13:00.060370    2446 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 04:13:00.061868    2446 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0911 04:13:00.061895    2446 kubeadm.go:404] StartCluster: {Name:image-094000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:image-094000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:13:00.061963    2446 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0911 04:13:00.067381    2446 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 04:13:00.070682    2446 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 04:13:00.073456    2446 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 04:13:00.076283    2446 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 04:13:00.076294    2446 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0911 04:13:00.099164    2446 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0911 04:13:00.099194    2446 kubeadm.go:322] [preflight] Running pre-flight checks
	I0911 04:13:00.150966    2446 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0911 04:13:00.151015    2446 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0911 04:13:00.151053    2446 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0911 04:13:00.209379    2446 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0911 04:13:00.216589    2446 out.go:204]   - Generating certificates and keys ...
	I0911 04:13:00.216624    2446 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0911 04:13:00.216663    2446 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0911 04:13:00.316473    2446 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0911 04:13:00.413096    2446 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0911 04:13:00.530719    2446 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0911 04:13:00.723552    2446 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0911 04:13:00.862384    2446 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0911 04:13:00.862455    2446 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [image-094000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0911 04:13:00.993804    2446 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0911 04:13:00.993876    2446 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [image-094000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0911 04:13:01.067941    2446 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0911 04:13:01.107080    2446 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0911 04:13:01.177235    2446 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0911 04:13:01.177263    2446 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0911 04:13:01.237251    2446 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0911 04:13:01.409110    2446 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0911 04:13:01.523491    2446 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0911 04:13:01.780839    2446 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0911 04:13:01.781063    2446 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0911 04:13:01.782177    2446 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0911 04:13:01.790551    2446 out.go:204]   - Booting up control plane ...
	I0911 04:13:01.790629    2446 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0911 04:13:01.790687    2446 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0911 04:13:01.790716    2446 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0911 04:13:01.790776    2446 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0911 04:13:01.790855    2446 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0911 04:13:01.790960    2446 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0911 04:13:01.863034    2446 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0911 04:13:05.944531    2446 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.001769 seconds
	I0911 04:13:05.944591    2446 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0911 04:13:05.950084    2446 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0911 04:13:06.461427    2446 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0911 04:13:06.461546    2446 kubeadm.go:322] [mark-control-plane] Marking the node image-094000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0911 04:13:06.969325    2446 kubeadm.go:322] [bootstrap-token] Using token: 3fc28q.r52585dmbk3d76de
	I0911 04:13:06.974565    2446 out.go:204]   - Configuring RBAC rules ...
	I0911 04:13:06.974638    2446 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0911 04:13:06.976559    2446 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0911 04:13:06.983289    2446 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0911 04:13:06.984508    2446 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0911 04:13:06.985721    2446 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0911 04:13:06.986935    2446 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0911 04:13:06.991265    2446 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0911 04:13:07.174833    2446 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0911 04:13:07.379343    2446 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0911 04:13:07.379676    2446 kubeadm.go:322] 
	I0911 04:13:07.379701    2446 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0911 04:13:07.379703    2446 kubeadm.go:322] 
	I0911 04:13:07.379735    2446 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0911 04:13:07.379736    2446 kubeadm.go:322] 
	I0911 04:13:07.379750    2446 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0911 04:13:07.379778    2446 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0911 04:13:07.379800    2446 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0911 04:13:07.379801    2446 kubeadm.go:322] 
	I0911 04:13:07.379828    2446 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0911 04:13:07.379829    2446 kubeadm.go:322] 
	I0911 04:13:07.379859    2446 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0911 04:13:07.379860    2446 kubeadm.go:322] 
	I0911 04:13:07.379888    2446 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0911 04:13:07.379928    2446 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0911 04:13:07.379968    2446 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0911 04:13:07.379971    2446 kubeadm.go:322] 
	I0911 04:13:07.380017    2446 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0911 04:13:07.380058    2446 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0911 04:13:07.380061    2446 kubeadm.go:322] 
	I0911 04:13:07.380101    2446 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 3fc28q.r52585dmbk3d76de \
	I0911 04:13:07.380162    2446 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fefaa3365accf94cefbb66337f5f2e8a6ced437eccd2cdfbf367c2be71bce2cf \
	I0911 04:13:07.380173    2446 kubeadm.go:322] 	--control-plane 
	I0911 04:13:07.380174    2446 kubeadm.go:322] 
	I0911 04:13:07.380216    2446 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0911 04:13:07.380218    2446 kubeadm.go:322] 
	I0911 04:13:07.380261    2446 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 3fc28q.r52585dmbk3d76de \
	I0911 04:13:07.380310    2446 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fefaa3365accf94cefbb66337f5f2e8a6ced437eccd2cdfbf367c2be71bce2cf 
	I0911 04:13:07.380399    2446 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0911 04:13:07.380406    2446 cni.go:84] Creating CNI manager for ""
	I0911 04:13:07.380412    2446 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 04:13:07.389552    2446 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 04:13:07.392624    2446 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 04:13:07.395581    2446 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0911 04:13:07.400239    2446 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0911 04:13:07.400283    2446 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:13:07.400299    2446 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=c0ed13cc972769b226a536a2831a80a40376f282 minikube.k8s.io/name=image-094000 minikube.k8s.io/updated_at=2023_09_11T04_13_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:13:07.468194    2446 kubeadm.go:1081] duration metric: took 67.952125ms to wait for elevateKubeSystemPrivileges.
	I0911 04:13:07.468215    2446 ops.go:34] apiserver oom_adj: -16
	I0911 04:13:07.468218    2446 kubeadm.go:406] StartCluster complete in 7.326890125s
	I0911 04:13:07.468228    2446 settings.go:142] acquiring lock: {Name:mkc25efdeb235bb06c8f15f7bc2dab1fff3cf449 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:13:07.468309    2446 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 04:13:07.468650    2446 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/kubeconfig: {Name:mk9102949afcf8989652bad8d36d55e289cc75c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:13:07.468812    2446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0911 04:13:07.468859    2446 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0911 04:13:07.468896    2446 addons.go:69] Setting storage-provisioner=true in profile "image-094000"
	I0911 04:13:07.468902    2446 addons.go:231] Setting addon storage-provisioner=true in "image-094000"
	I0911 04:13:07.468903    2446 addons.go:69] Setting default-storageclass=true in profile "image-094000"
	I0911 04:13:07.468909    2446 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "image-094000"
	I0911 04:13:07.468926    2446 host.go:66] Checking if "image-094000" exists ...
	I0911 04:13:07.468930    2446 config.go:182] Loaded profile config "image-094000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:13:07.473659    2446 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 04:13:07.475322    2446 addons.go:231] Setting addon default-storageclass=true in "image-094000"
	I0911 04:13:07.477545    2446 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 04:13:07.477549    2446 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0911 04:13:07.477555    2446 host.go:66] Checking if "image-094000" exists ...
	I0911 04:13:07.477556    2446 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/image-094000/id_rsa Username:docker}
	I0911 04:13:07.478299    2446 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0911 04:13:07.478301    2446 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0911 04:13:07.478304    2446 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/image-094000/id_rsa Username:docker}
	I0911 04:13:07.479825    2446 kapi.go:248] "coredns" deployment in "kube-system" namespace and "image-094000" context rescaled to 1 replicas
	I0911 04:13:07.479839    2446 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:13:07.487499    2446 out.go:177] * Verifying Kubernetes components...
	I0911 04:13:07.491579    2446 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 04:13:07.518444    2446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0911 04:13:07.519576    2446 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0911 04:13:07.519755    2446 api_server.go:52] waiting for apiserver process to appear ...
	I0911 04:13:07.519778    2446 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 04:13:07.562358    2446 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 04:13:07.960590    2446 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0911 04:13:07.960600    2446 api_server.go:72] duration metric: took 480.828125ms to wait for apiserver process to appear ...
	I0911 04:13:07.960605    2446 api_server.go:88] waiting for apiserver healthz status ...
	I0911 04:13:07.960612    2446 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0911 04:13:07.964189    2446 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
	I0911 04:13:07.964979    2446 api_server.go:141] control plane version: v1.28.1
	I0911 04:13:07.964983    2446 api_server.go:131] duration metric: took 4.376667ms to wait for apiserver health ...
	I0911 04:13:07.964987    2446 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 04:13:07.967947    2446 system_pods.go:59] 4 kube-system pods found
	I0911 04:13:07.967953    2446 system_pods.go:61] "etcd-image-094000" [a2142a44-e6dd-4064-9614-e7f2b7005c99] Pending
	I0911 04:13:07.967955    2446 system_pods.go:61] "kube-apiserver-image-094000" [0d92fee5-1bcb-4e1a-85a8-ed9bd6c3e64d] Pending
	I0911 04:13:07.967957    2446 system_pods.go:61] "kube-controller-manager-image-094000" [7015524b-2bf7-4389-9aa2-baf802375f35] Pending
	I0911 04:13:07.967959    2446 system_pods.go:61] "kube-scheduler-image-094000" [c3d25298-c447-42e0-af5b-05eb17f5b7cc] Pending
	I0911 04:13:07.967960    2446 system_pods.go:74] duration metric: took 2.972417ms to wait for pod list to return data ...
	I0911 04:13:07.967964    2446 kubeadm.go:581] duration metric: took 488.194959ms to wait for : map[apiserver:true system_pods:true] ...
	I0911 04:13:07.967969    2446 node_conditions.go:102] verifying NodePressure condition ...
	I0911 04:13:07.969464    2446 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0911 04:13:07.969469    2446 node_conditions.go:123] node cpu capacity is 2
	I0911 04:13:07.969475    2446 node_conditions.go:105] duration metric: took 1.504375ms to run NodePressure ...
	I0911 04:13:07.969479    2446 start.go:228] waiting for startup goroutines ...
	I0911 04:13:08.059385    2446 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0911 04:13:08.067183    2446 addons.go:502] enable addons completed in 598.421333ms: enabled=[default-storageclass storage-provisioner]
	I0911 04:13:08.067194    2446 start.go:233] waiting for cluster config update ...
	I0911 04:13:08.067198    2446 start.go:242] writing updated cluster config ...
	I0911 04:13:08.067480    2446 ssh_runner.go:195] Run: rm -f paused
	I0911 04:13:08.096051    2446 start.go:600] kubectl: 1.27.2, cluster: 1.28.1 (minor skew: 1)
	I0911 04:13:08.100397    2446 out.go:177] * Done! kubectl is now configured to use "image-094000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-09-11 11:12:49 UTC, ends at Mon 2023-09-11 11:13:09 UTC. --
	Sep 11 11:13:03 image-094000 cri-dockerd[1062]: time="2023-09-11T11:13:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9ca46461e419831eae18931ea9f468c3600b0ff0f8203e4400c507e5c2c40631/resolv.conf as [nameserver 192.168.105.1]"
	Sep 11 11:13:03 image-094000 dockerd[1168]: time="2023-09-11T11:13:03.147184590Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 11 11:13:03 image-094000 dockerd[1168]: time="2023-09-11T11:13:03.147298756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 11 11:13:03 image-094000 dockerd[1168]: time="2023-09-11T11:13:03.147326006Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 11 11:13:03 image-094000 dockerd[1168]: time="2023-09-11T11:13:03.147349465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 11 11:13:03 image-094000 dockerd[1168]: time="2023-09-11T11:13:03.157596506Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 11 11:13:03 image-094000 dockerd[1168]: time="2023-09-11T11:13:03.157686298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 11 11:13:03 image-094000 dockerd[1168]: time="2023-09-11T11:13:03.157714715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 11 11:13:03 image-094000 dockerd[1168]: time="2023-09-11T11:13:03.157737798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 11 11:13:03 image-094000 cri-dockerd[1062]: time="2023-09-11T11:13:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c077b61896a75b038ddc16564ba36d27d66c25d10d2e225134c11e1326ba9e42/resolv.conf as [nameserver 192.168.105.1]"
	Sep 11 11:13:03 image-094000 dockerd[1168]: time="2023-09-11T11:13:03.257274882Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 11 11:13:03 image-094000 dockerd[1168]: time="2023-09-11T11:13:03.257402257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 11 11:13:03 image-094000 dockerd[1168]: time="2023-09-11T11:13:03.257432590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 11 11:13:03 image-094000 dockerd[1168]: time="2023-09-11T11:13:03.257470757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 11 11:13:09 image-094000 dockerd[1162]: time="2023-09-11T11:13:09.280749801Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Sep 11 11:13:09 image-094000 dockerd[1162]: time="2023-09-11T11:13:09.404514843Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Sep 11 11:13:09 image-094000 dockerd[1162]: time="2023-09-11T11:13:09.419842259Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Sep 11 11:13:09 image-094000 dockerd[1168]: time="2023-09-11T11:13:09.453080926Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 11 11:13:09 image-094000 dockerd[1168]: time="2023-09-11T11:13:09.453109051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 11 11:13:09 image-094000 dockerd[1168]: time="2023-09-11T11:13:09.453120593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 11 11:13:09 image-094000 dockerd[1168]: time="2023-09-11T11:13:09.453126843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 11 11:13:09 image-094000 dockerd[1162]: time="2023-09-11T11:13:09.588887843Z" level=info msg="ignoring event" container=681f225bd08deb6ae528d4113f7c06081688d357f0ba37b23aca7bc9e93e47ee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 11 11:13:09 image-094000 dockerd[1168]: time="2023-09-11T11:13:09.588940593Z" level=info msg="shim disconnected" id=681f225bd08deb6ae528d4113f7c06081688d357f0ba37b23aca7bc9e93e47ee namespace=moby
	Sep 11 11:13:09 image-094000 dockerd[1168]: time="2023-09-11T11:13:09.588974343Z" level=warning msg="cleaning up after shim disconnected" id=681f225bd08deb6ae528d4113f7c06081688d357f0ba37b23aca7bc9e93e47ee namespace=moby
	Sep 11 11:13:09 image-094000 dockerd[1168]: time="2023-09-11T11:13:09.588978885Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	5f2a0e14e5685       9cdd6470f48c8       6 seconds ago       Running             etcd                      0                   c077b61896a75
	52e569f9e41d7       b4a5a57e99492       6 seconds ago       Running             kube-scheduler            0                   9ca46461e4198
	18c2597b62fa5       b29fb62480892       6 seconds ago       Running             kube-apiserver            0                   e42e6c4578035
	3ce4b0b01a6fc       8b6e1980b7584       6 seconds ago       Running             kube-controller-manager   0                   69525448c57e5
	
	* 
	* ==> describe nodes <==
	* Name:               image-094000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=image-094000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c0ed13cc972769b226a536a2831a80a40376f282
	                    minikube.k8s.io/name=image-094000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_11T04_13_07_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Sep 2023 11:13:04 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  image-094000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Sep 2023 11:13:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Sep 2023 11:13:07 +0000   Mon, 11 Sep 2023 11:13:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Sep 2023 11:13:07 +0000   Mon, 11 Sep 2023 11:13:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Sep 2023 11:13:07 +0000   Mon, 11 Sep 2023 11:13:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 11 Sep 2023 11:13:07 +0000   Mon, 11 Sep 2023 11:13:03 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
	Addresses:
	  InternalIP:  192.168.105.5
	  Hostname:    image-094000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 cdf454ca2ea043e79bd94ea3863709bd
	  System UUID:                cdf454ca2ea043e79bd94ea3863709bd
	  Boot ID:                    4fa70365-d4cf-48ec-8100-c91138e1a3db
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.5
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-image-094000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3s
	  kube-system                 kube-apiserver-image-094000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-controller-manager-image-094000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-image-094000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)   0 (0%)
	  memory             100Mi (2%)   0 (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From     Message
	  ----    ------                   ----  ----     -------
	  Normal  Starting                 3s    kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  3s    kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3s    kubelet  Node image-094000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s    kubelet  Node image-094000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s    kubelet  Node image-094000 status is now: NodeHasSufficientPID
	
	* 
	* ==> dmesg <==
	* [Sep11 11:12] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.646051] EINJ: EINJ table not found.
	[  +0.511437] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043581] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000797] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.229047] systemd-fstab-generator[476]: Ignoring "noauto" for root device
	[  +0.060677] systemd-fstab-generator[487]: Ignoring "noauto" for root device
	[  +0.426743] systemd-fstab-generator[756]: Ignoring "noauto" for root device
	[  +0.165917] systemd-fstab-generator[795]: Ignoring "noauto" for root device
	[  +0.059514] systemd-fstab-generator[806]: Ignoring "noauto" for root device
	[  +0.061764] systemd-fstab-generator[819]: Ignoring "noauto" for root device
	[  +1.213073] systemd-fstab-generator[977]: Ignoring "noauto" for root device
	[  +0.064051] systemd-fstab-generator[988]: Ignoring "noauto" for root device
	[  +0.066202] systemd-fstab-generator[999]: Ignoring "noauto" for root device
	[  +0.055775] systemd-fstab-generator[1010]: Ignoring "noauto" for root device
	[  +0.079105] systemd-fstab-generator[1055]: Ignoring "noauto" for root device
	[  +2.538201] systemd-fstab-generator[1155]: Ignoring "noauto" for root device
	[  +1.612477] kauditd_printk_skb: 53 callbacks suppressed
	[Sep11 11:13] systemd-fstab-generator[1482]: Ignoring "noauto" for root device
	[  +5.135660] systemd-fstab-generator[2350]: Ignoring "noauto" for root device
	[  +2.261272] kauditd_printk_skb: 41 callbacks suppressed
	
	* 
	* ==> etcd [5f2a0e14e568] <==
	* {"level":"info","ts":"2023-09-11T11:13:03.333384Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-09-11T11:13:03.333412Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-09-11T11:13:03.333699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 switched to configuration voters=(6403572207504089856)"}
	{"level":"info","ts":"2023-09-11T11:13:03.333755Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","added-peer-id":"58de0efec1d86300","added-peer-peer-urls":["https://192.168.105.5:2380"]}
	{"level":"info","ts":"2023-09-11T11:13:03.333844Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-11T11:13:03.333886Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-11T11:13:03.333905Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-11T11:13:04.129671Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 is starting a new election at term 1"}
	{"level":"info","ts":"2023-09-11T11:13:04.12981Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-09-11T11:13:04.129845Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgPreVoteResp from 58de0efec1d86300 at term 1"}
	{"level":"info","ts":"2023-09-11T11:13:04.129866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became candidate at term 2"}
	{"level":"info","ts":"2023-09-11T11:13:04.129874Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgVoteResp from 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-09-11T11:13:04.129887Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became leader at term 2"}
	{"level":"info","ts":"2023-09-11T11:13:04.129898Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 58de0efec1d86300 elected leader 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-09-11T11:13:04.130971Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T11:13:04.131858Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"58de0efec1d86300","local-member-attributes":"{Name:image-094000 ClientURLs:[https://192.168.105.5:2379]}","request-path":"/0/members/58de0efec1d86300/attributes","cluster-id":"cd5c0afff2184bea","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-11T11:13:04.132178Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T11:13:04.13241Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-11T11:13:04.132446Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-11T11:13:04.132266Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T11:13:04.132678Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T11:13:04.132887Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T11:13:04.132291Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T11:13:04.134306Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.5:2379"}
	{"level":"info","ts":"2023-09-11T11:13:04.134796Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  11:13:10 up 0 min,  0 users,  load average: 0.59, 0.13, 0.04
	Linux image-094000 5.10.57 #1 SMP PREEMPT Thu Aug 24 12:01:08 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [18c2597b62fa] <==
	* I0911 11:13:04.848615       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0911 11:13:04.849026       1 controller.go:624] quota admission added evaluator for: namespaces
	I0911 11:13:04.849048       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0911 11:13:04.849051       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0911 11:13:04.849213       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0911 11:13:04.851025       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0911 11:13:04.865686       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0911 11:13:04.865726       1 aggregator.go:166] initial CRD sync complete...
	I0911 11:13:04.865736       1 autoregister_controller.go:141] Starting autoregister controller
	I0911 11:13:04.865755       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0911 11:13:04.865758       1 cache.go:39] Caches are synced for autoregister controller
	I0911 11:13:04.875033       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0911 11:13:05.751701       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0911 11:13:05.753752       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0911 11:13:05.753760       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0911 11:13:05.892543       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0911 11:13:05.902170       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0911 11:13:05.953326       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0911 11:13:05.955640       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.105.5]
	I0911 11:13:05.956057       1 controller.go:624] quota admission added evaluator for: endpoints
	I0911 11:13:05.960019       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0911 11:13:06.783562       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0911 11:13:07.316096       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0911 11:13:07.320302       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0911 11:13:07.323572       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	
	* 
	* ==> kube-controller-manager [3ce4b0b01a6f] <==
	* I0911 11:13:03.385006       1 serving.go:348] Generated self-signed cert in-memory
	I0911 11:13:03.956156       1 controllermanager.go:189] "Starting" version="v1.28.1"
	I0911 11:13:03.956173       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0911 11:13:03.956747       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0911 11:13:03.956832       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0911 11:13:03.957039       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0911 11:13:03.957119       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0911 11:13:06.779676       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0911 11:13:06.784960       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0911 11:13:06.785052       1 stateful_set.go:161] "Starting stateful set controller"
	I0911 11:13:06.785058       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0911 11:13:06.880159       1 shared_informer.go:318] Caches are synced for tokens
	
	* 
	* ==> kube-scheduler [52e569f9e41d] <==
	* W0911 11:13:04.822615       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0911 11:13:04.822623       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0911 11:13:04.822667       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0911 11:13:04.822682       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0911 11:13:04.822745       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0911 11:13:04.822750       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0911 11:13:04.822783       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0911 11:13:04.822791       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0911 11:13:04.822823       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0911 11:13:04.822859       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0911 11:13:04.822899       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0911 11:13:04.822907       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0911 11:13:04.822955       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0911 11:13:04.822963       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0911 11:13:04.822983       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0911 11:13:04.822991       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0911 11:13:04.823021       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0911 11:13:04.823029       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0911 11:13:04.823043       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0911 11:13:04.823064       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0911 11:13:04.823100       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0911 11:13:04.823107       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0911 11:13:04.823394       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0911 11:13:04.823423       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0911 11:13:06.021695       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-11 11:12:49 UTC, ends at Mon 2023-09-11 11:13:10 UTC. --
	Sep 11 11:13:07 image-094000 kubelet[2370]: I0911 11:13:07.460076    2370 kubelet_node_status.go:70] "Attempting to register node" node="image-094000"
	Sep 11 11:13:07 image-094000 kubelet[2370]: I0911 11:13:07.464364    2370 topology_manager.go:215] "Topology Admit Handler" podUID="d1193a6a3676fdd94a084ec2c8b70fdc" podNamespace="kube-system" podName="kube-controller-manager-image-094000"
	Sep 11 11:13:07 image-094000 kubelet[2370]: I0911 11:13:07.464464    2370 topology_manager.go:215] "Topology Admit Handler" podUID="f427acca56bd6a43ae86b7bee933b072" podNamespace="kube-system" podName="kube-scheduler-image-094000"
	Sep 11 11:13:07 image-094000 kubelet[2370]: I0911 11:13:07.464485    2370 topology_manager.go:215] "Topology Admit Handler" podUID="08c891a10117fe32f3efbcf882a4b1cb" podNamespace="kube-system" podName="etcd-image-094000"
	Sep 11 11:13:07 image-094000 kubelet[2370]: I0911 11:13:07.464510    2370 topology_manager.go:215] "Topology Admit Handler" podUID="84caddfcd8316b28d812796478a44c66" podNamespace="kube-system" podName="kube-apiserver-image-094000"
	Sep 11 11:13:07 image-094000 kubelet[2370]: I0911 11:13:07.465124    2370 kubelet_node_status.go:108] "Node was previously registered" node="image-094000"
	Sep 11 11:13:07 image-094000 kubelet[2370]: I0911 11:13:07.465156    2370 kubelet_node_status.go:73] "Successfully registered node" node="image-094000"
	Sep 11 11:13:07 image-094000 kubelet[2370]: I0911 11:13:07.661948    2370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1193a6a3676fdd94a084ec2c8b70fdc-ca-certs\") pod \"kube-controller-manager-image-094000\" (UID: \"d1193a6a3676fdd94a084ec2c8b70fdc\") " pod="kube-system/kube-controller-manager-image-094000"
	Sep 11 11:13:07 image-094000 kubelet[2370]: I0911 11:13:07.661993    2370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1193a6a3676fdd94a084ec2c8b70fdc-k8s-certs\") pod \"kube-controller-manager-image-094000\" (UID: \"d1193a6a3676fdd94a084ec2c8b70fdc\") " pod="kube-system/kube-controller-manager-image-094000"
	Sep 11 11:13:07 image-094000 kubelet[2370]: I0911 11:13:07.662004    2370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1193a6a3676fdd94a084ec2c8b70fdc-kubeconfig\") pod \"kube-controller-manager-image-094000\" (UID: \"d1193a6a3676fdd94a084ec2c8b70fdc\") " pod="kube-system/kube-controller-manager-image-094000"
	Sep 11 11:13:07 image-094000 kubelet[2370]: I0911 11:13:07.662014    2370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f427acca56bd6a43ae86b7bee933b072-kubeconfig\") pod \"kube-scheduler-image-094000\" (UID: \"f427acca56bd6a43ae86b7bee933b072\") " pod="kube-system/kube-scheduler-image-094000"
	Sep 11 11:13:07 image-094000 kubelet[2370]: I0911 11:13:07.662028    2370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84caddfcd8316b28d812796478a44c66-ca-certs\") pod \"kube-apiserver-image-094000\" (UID: \"84caddfcd8316b28d812796478a44c66\") " pod="kube-system/kube-apiserver-image-094000"
	Sep 11 11:13:07 image-094000 kubelet[2370]: I0911 11:13:07.662041    2370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1193a6a3676fdd94a084ec2c8b70fdc-flexvolume-dir\") pod \"kube-controller-manager-image-094000\" (UID: \"d1193a6a3676fdd94a084ec2c8b70fdc\") " pod="kube-system/kube-controller-manager-image-094000"
	Sep 11 11:13:07 image-094000 kubelet[2370]: I0911 11:13:07.662064    2370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1193a6a3676fdd94a084ec2c8b70fdc-usr-share-ca-certificates\") pod \"kube-controller-manager-image-094000\" (UID: \"d1193a6a3676fdd94a084ec2c8b70fdc\") " pod="kube-system/kube-controller-manager-image-094000"
	Sep 11 11:13:07 image-094000 kubelet[2370]: I0911 11:13:07.662076    2370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/08c891a10117fe32f3efbcf882a4b1cb-etcd-certs\") pod \"etcd-image-094000\" (UID: \"08c891a10117fe32f3efbcf882a4b1cb\") " pod="kube-system/etcd-image-094000"
	Sep 11 11:13:07 image-094000 kubelet[2370]: I0911 11:13:07.662085    2370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/08c891a10117fe32f3efbcf882a4b1cb-etcd-data\") pod \"etcd-image-094000\" (UID: \"08c891a10117fe32f3efbcf882a4b1cb\") " pod="kube-system/etcd-image-094000"
	Sep 11 11:13:07 image-094000 kubelet[2370]: I0911 11:13:07.662095    2370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84caddfcd8316b28d812796478a44c66-k8s-certs\") pod \"kube-apiserver-image-094000\" (UID: \"84caddfcd8316b28d812796478a44c66\") " pod="kube-system/kube-apiserver-image-094000"
	Sep 11 11:13:07 image-094000 kubelet[2370]: I0911 11:13:07.662104    2370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84caddfcd8316b28d812796478a44c66-usr-share-ca-certificates\") pod \"kube-apiserver-image-094000\" (UID: \"84caddfcd8316b28d812796478a44c66\") " pod="kube-system/kube-apiserver-image-094000"
	Sep 11 11:13:08 image-094000 kubelet[2370]: I0911 11:13:08.346524    2370 apiserver.go:52] "Watching apiserver"
	Sep 11 11:13:08 image-094000 kubelet[2370]: I0911 11:13:08.361213    2370 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Sep 11 11:13:08 image-094000 kubelet[2370]: E0911 11:13:08.416312    2370 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-image-094000\" already exists" pod="kube-system/kube-apiserver-image-094000"
	Sep 11 11:13:08 image-094000 kubelet[2370]: I0911 11:13:08.421677    2370 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-image-094000" podStartSLOduration=1.421650426 podCreationTimestamp="2023-09-11 11:13:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-11 11:13:08.421563426 +0000 UTC m=+1.118416501" watchObservedRunningTime="2023-09-11 11:13:08.421650426 +0000 UTC m=+1.118503460"
	Sep 11 11:13:08 image-094000 kubelet[2370]: I0911 11:13:08.425289    2370 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-image-094000" podStartSLOduration=1.425274092 podCreationTimestamp="2023-09-11 11:13:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-11 11:13:08.425198634 +0000 UTC m=+1.122051710" watchObservedRunningTime="2023-09-11 11:13:08.425274092 +0000 UTC m=+1.122127168"
	Sep 11 11:13:08 image-094000 kubelet[2370]: I0911 11:13:08.431758    2370 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-image-094000" podStartSLOduration=1.431735217 podCreationTimestamp="2023-09-11 11:13:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-11 11:13:08.428201467 +0000 UTC m=+1.125054543" watchObservedRunningTime="2023-09-11 11:13:08.431735217 +0000 UTC m=+1.128588293"
	Sep 11 11:13:08 image-094000 kubelet[2370]: I0911 11:13:08.435847    2370 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-image-094000" podStartSLOduration=1.435826259 podCreationTimestamp="2023-09-11 11:13:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-11 11:13:08.431838467 +0000 UTC m=+1.128691501" watchObservedRunningTime="2023-09-11 11:13:08.435826259 +0000 UTC m=+1.132679335"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p image-094000 -n image-094000
helpers_test.go:261: (dbg) Run:  kubectl --context image-094000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestImageBuild/serial/BuildWithBuildArg]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context image-094000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context image-094000 describe pod storage-provisioner: exit status 1 (42.242375ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context image-094000 describe pod storage-provisioner: exit status 1
--- FAIL: TestImageBuild/serial/BuildWithBuildArg (1.04s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (57.08s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-131000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-131000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (18.58467s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-131000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-131000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ffa2e0f8-42c0-4e50-8753-f9444f9bb1d2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ffa2e0f8-42c0-4e50-8753-f9444f9bb1d2] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.0176215s
addons_test.go:238: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-131000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-131000 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-131000 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.105.6
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.105.6: exit status 1 (15.040314708s)

-- stdout --
	;; connection timed out; no servers could be reached
	

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.105.6" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-131000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-131000 addons disable ingress-dns --alsologtostderr -v=1: (6.111140584s)
addons_test.go:287: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-131000 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-131000 addons disable ingress --alsologtostderr -v=1: (7.096057042s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ingress-addon-legacy-131000 -n ingress-addon-legacy-131000
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-131000 logs -n 25
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|----------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                           Args                           |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| update-context | functional-942000                                        | functional-942000           | jenkins | v1.31.2 | 11 Sep 23 04:07 PDT | 11 Sep 23 04:07 PDT |
	|                | update-context                                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                             |         |         |                     |                     |
	| image          | functional-942000 image ls                               | functional-942000           | jenkins | v1.31.2 | 11 Sep 23 04:07 PDT | 11 Sep 23 04:08 PDT |
	| image          | functional-942000 image load                             | functional-942000           | jenkins | v1.31.2 | 11 Sep 23 04:08 PDT | 11 Sep 23 04:08 PDT |
	|                | /Users/jenkins/workspace/addon-resizer-save.tar          |                             |         |         |                     |                     |
	|                | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image          | functional-942000 image save --daemon                    | functional-942000           | jenkins | v1.31.2 | 11 Sep 23 04:08 PDT | 11 Sep 23 04:09 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-942000 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image          | functional-942000                                        | functional-942000           | jenkins | v1.31.2 | 11 Sep 23 04:09 PDT | 11 Sep 23 04:10 PDT |
	|                | image ls --format yaml                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image          | functional-942000                                        | functional-942000           | jenkins | v1.31.2 | 11 Sep 23 04:09 PDT | 11 Sep 23 04:09 PDT |
	|                | image ls --format short                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                                        |                             |         |         |                     |                     |
	| ssh            | functional-942000 ssh pgrep                              | functional-942000           | jenkins | v1.31.2 | 11 Sep 23 04:09 PDT |                     |
	|                | buildkitd                                                |                             |         |         |                     |                     |
	| image          | functional-942000 image build -t                         | functional-942000           | jenkins | v1.31.2 | 11 Sep 23 04:09 PDT |                     |
	|                | localhost/my-image:functional-942000                     |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                         |                             |         |         |                     |                     |
	| image          | functional-942000 image ls                               | functional-942000           | jenkins | v1.31.2 | 11 Sep 23 04:10 PDT | 11 Sep 23 04:11 PDT |
	| image          | functional-942000                                        | functional-942000           | jenkins | v1.31.2 | 11 Sep 23 04:10 PDT |                     |
	|                | image ls --format json                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image          | functional-942000                                        | functional-942000           | jenkins | v1.31.2 | 11 Sep 23 04:11 PDT | 11 Sep 23 04:12 PDT |
	|                | image ls --format table                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                                        |                             |         |         |                     |                     |
	| delete         | -p functional-942000                                     | functional-942000           | jenkins | v1.31.2 | 11 Sep 23 04:12 PDT | 11 Sep 23 04:12 PDT |
	| start          | -p image-094000 --driver=qemu2                           | image-094000                | jenkins | v1.31.2 | 11 Sep 23 04:12 PDT | 11 Sep 23 04:13 PDT |
	|                |                                                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                                      | image-094000                | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT | 11 Sep 23 04:13 PDT |
	|                | ./testdata/image-build/test-normal                       |                             |         |         |                     |                     |
	|                | -p image-094000                                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                                      | image-094000                | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT | 11 Sep 23 04:13 PDT |
	|                | --build-opt=build-arg=ENV_A=test_env_str                 |                             |         |         |                     |                     |
	|                | --build-opt=no-cache                                     |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p                       |                             |         |         |                     |                     |
	|                | image-094000                                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                                      | image-094000                | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT | 11 Sep 23 04:13 PDT |
	|                | ./testdata/image-build/test-normal                       |                             |         |         |                     |                     |
	|                | --build-opt=no-cache -p                                  |                             |         |         |                     |                     |
	|                | image-094000                                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                                      | image-094000                | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT | 11 Sep 23 04:13 PDT |
	|                | -f inner/Dockerfile                                      |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-f                            |                             |         |         |                     |                     |
	|                | -p image-094000                                          |                             |         |         |                     |                     |
	| delete         | -p image-094000                                          | image-094000                | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT | 11 Sep 23 04:13 PDT |
	| start          | -p ingress-addon-legacy-131000                           | ingress-addon-legacy-131000 | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT | 11 Sep 23 04:14 PDT |
	|                | --kubernetes-version=v1.18.20                            |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                   |                             |         |         |                     |                     |
	|                | --driver=qemu2                                           |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-131000                              | ingress-addon-legacy-131000 | jenkins | v1.31.2 | 11 Sep 23 04:14 PDT | 11 Sep 23 04:14 PDT |
	|                | addons enable ingress                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-131000                              | ingress-addon-legacy-131000 | jenkins | v1.31.2 | 11 Sep 23 04:14 PDT | 11 Sep 23 04:14 PDT |
	|                | addons enable ingress-dns                                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                   |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-131000                              | ingress-addon-legacy-131000 | jenkins | v1.31.2 | 11 Sep 23 04:15 PDT | 11 Sep 23 04:15 PDT |
	|                | ssh curl -s http://127.0.0.1/                            |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                             |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-131000 ip                           | ingress-addon-legacy-131000 | jenkins | v1.31.2 | 11 Sep 23 04:15 PDT | 11 Sep 23 04:15 PDT |
	| addons         | ingress-addon-legacy-131000                              | ingress-addon-legacy-131000 | jenkins | v1.31.2 | 11 Sep 23 04:15 PDT | 11 Sep 23 04:15 PDT |
	|                | addons disable ingress-dns                               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-131000                              | ingress-addon-legacy-131000 | jenkins | v1.31.2 | 11 Sep 23 04:15 PDT | 11 Sep 23 04:15 PDT |
	|                | addons disable ingress                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                   |                             |         |         |                     |                     |
	|----------------|----------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
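	# Editor's sketch (not captured output): minikube renders the Audit table
	# above from its persistent audit log; assuming the default layout under
	# MINIKUBE_HOME, the raw JSON entries for this run would live at
	#   /Users/jenkins/minikube-integration/17225-951/.minikube/logs/audit.json
	# and can be inspected with, e.g.: tail -n 5 "$MINIKUBE_HOME/logs/audit.json"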
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/11 04:13:10
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0911 04:13:10.623182    2493 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:13:10.623292    2493 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:13:10.623295    2493 out.go:309] Setting ErrFile to fd 2...
	I0911 04:13:10.623297    2493 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:13:10.623402    2493 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:13:10.624373    2493 out.go:303] Setting JSON to false
	I0911 04:13:10.639597    2493 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2564,"bootTime":1694428226,"procs":416,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0911 04:13:10.639662    2493 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:13:10.644064    2493 out.go:177] * [ingress-addon-legacy-131000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:13:10.647184    2493 out.go:177]   - MINIKUBE_LOCATION=17225
	I0911 04:13:10.647330    2493 notify.go:220] Checking for updates...
	I0911 04:13:10.654096    2493 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 04:13:10.657167    2493 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:13:10.660143    2493 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:13:10.663156    2493 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	I0911 04:13:10.666137    2493 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:13:10.669399    2493 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:13:10.673104    2493 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:13:10.680232    2493 start.go:298] selected driver: qemu2
	I0911 04:13:10.680237    2493 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:13:10.680251    2493 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:13:10.682266    2493 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:13:10.685174    2493 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:13:10.688210    2493 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:13:10.688235    2493 cni.go:84] Creating CNI manager for ""
	I0911 04:13:10.688242    2493 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0911 04:13:10.688246    2493 start_flags.go:321] config:
	{Name:ingress-addon-legacy-131000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-131000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:13:10.692454    2493 iso.go:125] acquiring lock: {Name:mk940869f3ea2950f9d7f9f25946c8b774cc6054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:13:10.699097    2493 out.go:177] * Starting control plane node ingress-addon-legacy-131000 in cluster ingress-addon-legacy-131000
	I0911 04:13:10.703209    2493 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0911 04:13:10.755792    2493 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0911 04:13:10.756167    2493 cache.go:57] Caching tarball of preloaded images
	I0911 04:13:10.756370    2493 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0911 04:13:10.759215    2493 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0911 04:13:10.767189    2493 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0911 04:13:10.840877    2493 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0911 04:13:18.110665    2493 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0911 04:13:18.110798    2493 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0911 04:13:18.858962    2493 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0911 04:13:18.859139    2493 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/config.json ...
	I0911 04:13:18.859160    2493 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/config.json: {Name:mkd56b2579d884a43b2f52e41806d0b7def27c52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:13:18.859405    2493 start.go:365] acquiring machines lock for ingress-addon-legacy-131000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:13:18.859433    2493 start.go:369] acquired machines lock for "ingress-addon-legacy-131000" in 21.375µs
	I0911 04:13:18.859443    2493 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-131000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-131000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:13:18.859478    2493 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:13:18.864500    2493 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0911 04:13:18.878888    2493 start.go:159] libmachine.API.Create for "ingress-addon-legacy-131000" (driver="qemu2")
	I0911 04:13:18.878908    2493 client.go:168] LocalClient.Create starting
	I0911 04:13:18.878988    2493 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:13:18.879013    2493 main.go:141] libmachine: Decoding PEM data...
	I0911 04:13:18.879026    2493 main.go:141] libmachine: Parsing certificate...
	I0911 04:13:18.879067    2493 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:13:18.879084    2493 main.go:141] libmachine: Decoding PEM data...
	I0911 04:13:18.879091    2493 main.go:141] libmachine: Parsing certificate...
	I0911 04:13:18.879391    2493 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:13:19.379047    2493 main.go:141] libmachine: Creating SSH key...
	I0911 04:13:19.484029    2493 main.go:141] libmachine: Creating Disk image...
	I0911 04:13:19.484035    2493 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:13:19.484172    2493 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/ingress-addon-legacy-131000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/ingress-addon-legacy-131000/disk.qcow2
	I0911 04:13:19.492844    2493 main.go:141] libmachine: STDOUT: 
	I0911 04:13:19.492858    2493 main.go:141] libmachine: STDERR: 
	I0911 04:13:19.492922    2493 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/ingress-addon-legacy-131000/disk.qcow2 +20000M
	I0911 04:13:19.500033    2493 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:13:19.500050    2493 main.go:141] libmachine: STDERR: 
	I0911 04:13:19.500068    2493 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/ingress-addon-legacy-131000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/ingress-addon-legacy-131000/disk.qcow2
	I0911 04:13:19.500081    2493 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:13:19.500116    2493 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4096 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/ingress-addon-legacy-131000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/ingress-addon-legacy-131000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/ingress-addon-legacy-131000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:f8:72:60:2e:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/ingress-addon-legacy-131000/disk.qcow2
	I0911 04:13:19.533432    2493 main.go:141] libmachine: STDOUT: 
	I0911 04:13:19.533512    2493 main.go:141] libmachine: STDERR: 
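	# Condensed sketch of the VM bring-up just executed (flags copied from the
	# log above; paths shortened for readability):
	#   qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	#   qemu-img resize disk.qcow2 +20000M
	#   /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	#     qemu-system-aarch64 -M virt -cpu host -accel hvf -m 4096 -smp 2 \
	#       -boot d -cdrom boot2docker.iso -daemonize disk.qcow2
	# socket_vmnet_client hands the vmnet-backed socket to QEMU as fd 3, which
	# is why the full command uses "-netdev socket,id=net0,fd=3".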
	I0911 04:13:19.533516    2493 main.go:141] libmachine: Attempt 0
	I0911 04:13:19.533538    2493 main.go:141] libmachine: Searching for 86:f8:72:60:2e:11 in /var/db/dhcpd_leases ...
	I0911 04:13:19.533604    2493 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0911 04:13:19.533624    2493 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:4a:79:d1:4b:1f:20 ID:1,4a:79:d1:4b:1f:20 Lease:0x650047b1}
	I0911 04:13:19.533632    2493 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:3e:be:43:94:58:11 ID:1,3e:be:43:94:58:11 Lease:0x65003eea}
	I0911 04:13:19.533637    2493 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:d6:e2:35:eb:4d:96 ID:1,d6:e2:35:eb:4d:96 Lease:0x64feed5e}
	I0911 04:13:19.533643    2493 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:f2:70:fc:4b:8b:fb ID:1,f2:70:fc:4b:8b:fb Lease:0x65003e9d}
	I0911 04:13:21.535646    2493 main.go:141] libmachine: Attempt 1
	I0911 04:13:21.535729    2493 main.go:141] libmachine: Searching for 86:f8:72:60:2e:11 in /var/db/dhcpd_leases ...
	I0911 04:13:21.536148    2493 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0911 04:13:21.536200    2493 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:4a:79:d1:4b:1f:20 ID:1,4a:79:d1:4b:1f:20 Lease:0x650047b1}
	I0911 04:13:21.536263    2493 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:3e:be:43:94:58:11 ID:1,3e:be:43:94:58:11 Lease:0x65003eea}
	I0911 04:13:21.536297    2493 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:d6:e2:35:eb:4d:96 ID:1,d6:e2:35:eb:4d:96 Lease:0x64feed5e}
	I0911 04:13:21.536331    2493 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:f2:70:fc:4b:8b:fb ID:1,f2:70:fc:4b:8b:fb Lease:0x65003e9d}
	I0911 04:13:23.538365    2493 main.go:141] libmachine: Attempt 2
	I0911 04:13:23.538404    2493 main.go:141] libmachine: Searching for 86:f8:72:60:2e:11 in /var/db/dhcpd_leases ...
	I0911 04:13:23.538508    2493 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0911 04:13:23.538522    2493 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:4a:79:d1:4b:1f:20 ID:1,4a:79:d1:4b:1f:20 Lease:0x650047b1}
	I0911 04:13:23.538535    2493 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:3e:be:43:94:58:11 ID:1,3e:be:43:94:58:11 Lease:0x65003eea}
	I0911 04:13:23.538541    2493 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:d6:e2:35:eb:4d:96 ID:1,d6:e2:35:eb:4d:96 Lease:0x64feed5e}
	I0911 04:13:23.538546    2493 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:f2:70:fc:4b:8b:fb ID:1,f2:70:fc:4b:8b:fb Lease:0x65003e9d}
	I0911 04:13:25.540470    2493 main.go:141] libmachine: Attempt 3
	I0911 04:13:25.540483    2493 main.go:141] libmachine: Searching for 86:f8:72:60:2e:11 in /var/db/dhcpd_leases ...
	I0911 04:13:25.540587    2493 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0911 04:13:25.540611    2493 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:4a:79:d1:4b:1f:20 ID:1,4a:79:d1:4b:1f:20 Lease:0x650047b1}
	I0911 04:13:25.540620    2493 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:3e:be:43:94:58:11 ID:1,3e:be:43:94:58:11 Lease:0x65003eea}
	I0911 04:13:25.540624    2493 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:d6:e2:35:eb:4d:96 ID:1,d6:e2:35:eb:4d:96 Lease:0x64feed5e}
	I0911 04:13:25.540631    2493 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:f2:70:fc:4b:8b:fb ID:1,f2:70:fc:4b:8b:fb Lease:0x65003e9d}
	I0911 04:13:27.542582    2493 main.go:141] libmachine: Attempt 4
	I0911 04:13:27.542600    2493 main.go:141] libmachine: Searching for 86:f8:72:60:2e:11 in /var/db/dhcpd_leases ...
	I0911 04:13:27.542684    2493 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0911 04:13:27.542693    2493 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:4a:79:d1:4b:1f:20 ID:1,4a:79:d1:4b:1f:20 Lease:0x650047b1}
	I0911 04:13:27.542701    2493 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:3e:be:43:94:58:11 ID:1,3e:be:43:94:58:11 Lease:0x65003eea}
	I0911 04:13:27.542719    2493 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:d6:e2:35:eb:4d:96 ID:1,d6:e2:35:eb:4d:96 Lease:0x64feed5e}
	I0911 04:13:27.542724    2493 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:f2:70:fc:4b:8b:fb ID:1,f2:70:fc:4b:8b:fb Lease:0x65003e9d}
	I0911 04:13:29.544711    2493 main.go:141] libmachine: Attempt 5
	I0911 04:13:29.544727    2493 main.go:141] libmachine: Searching for 86:f8:72:60:2e:11 in /var/db/dhcpd_leases ...
	I0911 04:13:29.544794    2493 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0911 04:13:29.544804    2493 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:4a:79:d1:4b:1f:20 ID:1,4a:79:d1:4b:1f:20 Lease:0x650047b1}
	I0911 04:13:29.544810    2493 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:3e:be:43:94:58:11 ID:1,3e:be:43:94:58:11 Lease:0x65003eea}
	I0911 04:13:29.544815    2493 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:d6:e2:35:eb:4d:96 ID:1,d6:e2:35:eb:4d:96 Lease:0x64feed5e}
	I0911 04:13:29.544821    2493 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:f2:70:fc:4b:8b:fb ID:1,f2:70:fc:4b:8b:fb Lease:0x65003e9d}
	I0911 04:13:31.546843    2493 main.go:141] libmachine: Attempt 6
	I0911 04:13:31.546880    2493 main.go:141] libmachine: Searching for 86:f8:72:60:2e:11 in /var/db/dhcpd_leases ...
	I0911 04:13:31.547017    2493 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0911 04:13:31.547031    2493 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:86:f8:72:60:2e:11 ID:1,86:f8:72:60:2e:11 Lease:0x650047da}
	I0911 04:13:31.547034    2493 main.go:141] libmachine: Found match: 86:f8:72:60:2e:11
	I0911 04:13:31.547045    2493 main.go:141] libmachine: IP: 192.168.105.6
	I0911 04:13:31.547050    2493 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
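	# The "Attempt N" loop above polls macOS's vmnet DHCP lease database for
	# the MAC the driver generated; roughly (a sketch, not the driver's code):
	#   until grep -q '86:f8:72:60:2e:11' /var/db/dhcpd_leases; do sleep 2; done
	#   grep -B2 '86:f8:72:60:2e:11' /var/db/dhcpd_leases   # ip_address= precedes hw_address=
	# Each lease record carries name/ip_address/hw_address/lease fields, which
	# is what the "dhcp entry" lines above echo back.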
	I0911 04:13:32.553162    2493 machine.go:88] provisioning docker machine ...
	I0911 04:13:32.553182    2493 buildroot.go:166] provisioning hostname "ingress-addon-legacy-131000"
	I0911 04:13:32.553223    2493 main.go:141] libmachine: Using SSH client type: native
	I0911 04:13:32.553480    2493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ebe3b0] 0x102ec0e10 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0911 04:13:32.553490    2493 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-131000 && echo "ingress-addon-legacy-131000" | sudo tee /etc/hostname
	I0911 04:13:32.610869    2493 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-131000
	
	I0911 04:13:32.610931    2493 main.go:141] libmachine: Using SSH client type: native
	I0911 04:13:32.611183    2493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ebe3b0] 0x102ec0e10 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0911 04:13:32.611195    2493 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-131000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-131000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-131000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 04:13:32.669349    2493 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 04:13:32.669360    2493 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17225-951/.minikube CaCertPath:/Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17225-951/.minikube}
	I0911 04:13:32.669370    2493 buildroot.go:174] setting up certificates
	I0911 04:13:32.669378    2493 provision.go:83] configureAuth start
	I0911 04:13:32.669383    2493 provision.go:138] copyHostCerts
	I0911 04:13:32.669413    2493 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17225-951/.minikube/ca.pem
	I0911 04:13:32.669453    2493 exec_runner.go:144] found /Users/jenkins/minikube-integration/17225-951/.minikube/ca.pem, removing ...
	I0911 04:13:32.669458    2493 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17225-951/.minikube/ca.pem
	I0911 04:13:32.669591    2493 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17225-951/.minikube/ca.pem (1078 bytes)
	I0911 04:13:32.669753    2493 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17225-951/.minikube/cert.pem
	I0911 04:13:32.669773    2493 exec_runner.go:144] found /Users/jenkins/minikube-integration/17225-951/.minikube/cert.pem, removing ...
	I0911 04:13:32.669775    2493 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17225-951/.minikube/cert.pem
	I0911 04:13:32.669822    2493 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17225-951/.minikube/cert.pem (1123 bytes)
	I0911 04:13:32.669896    2493 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17225-951/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17225-951/.minikube/key.pem
	I0911 04:13:32.669918    2493 exec_runner.go:144] found /Users/jenkins/minikube-integration/17225-951/.minikube/key.pem, removing ...
	I0911 04:13:32.669920    2493 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17225-951/.minikube/key.pem
	I0911 04:13:32.669965    2493 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17225-951/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17225-951/.minikube/key.pem (1675 bytes)
	I0911 04:13:32.670036    2493 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17225-951/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-131000 san=[192.168.105.6 192.168.105.6 localhost 127.0.0.1 minikube ingress-addon-legacy-131000]
	I0911 04:13:32.856377    2493 provision.go:172] copyRemoteCerts
	I0911 04:13:32.856417    2493 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 04:13:32.856449    2493 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/ingress-addon-legacy-131000/id_rsa Username:docker}
	I0911 04:13:32.889134    2493 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0911 04:13:32.889186    2493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0911 04:13:32.895984    2493 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17225-951/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0911 04:13:32.896030    2493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0911 04:13:32.903196    2493 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17225-951/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0911 04:13:32.903238    2493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0911 04:13:32.910542    2493 provision.go:86] duration metric: configureAuth took 241.162917ms
	I0911 04:13:32.910551    2493 buildroot.go:189] setting minikube options for container-runtime
	I0911 04:13:32.910658    2493 config.go:182] Loaded profile config "ingress-addon-legacy-131000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0911 04:13:32.910692    2493 main.go:141] libmachine: Using SSH client type: native
	I0911 04:13:32.910907    2493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ebe3b0] 0x102ec0e10 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0911 04:13:32.910912    2493 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0911 04:13:32.967979    2493 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0911 04:13:32.967990    2493 buildroot.go:70] root file system type: tmpfs
	I0911 04:13:32.968053    2493 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0911 04:13:32.968098    2493 main.go:141] libmachine: Using SSH client type: native
	I0911 04:13:32.968343    2493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ebe3b0] 0x102ec0e10 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0911 04:13:32.968379    2493 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0911 04:13:33.028100    2493 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0911 04:13:33.028156    2493 main.go:141] libmachine: Using SSH client type: native
	I0911 04:13:33.028413    2493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ebe3b0] 0x102ec0e10 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0911 04:13:33.028422    2493 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0911 04:13:33.382425    2493 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
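	# The unit install above follows a diff-or-replace idiom so re-provisioning
	# stays idempotent (pattern as run, paths shortened):
	#   sudo diff -u docker.service docker.service.new \
	#     || { sudo mv docker.service.new docker.service; \
	#          sudo systemctl -f daemon-reload && sudo systemctl -f enable docker \
	#          && sudo systemctl -f restart docker; }
	# On a fresh VM the diff fails ("can't stat ... docker.service"), so the new
	# unit is installed and enabled; hence the "Created symlink" line above.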
	
	I0911 04:13:33.382444    2493 machine.go:91] provisioned docker machine in 829.299792ms
	I0911 04:13:33.382450    2493 client.go:171] LocalClient.Create took 14.504286541s
	I0911 04:13:33.382464    2493 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-131000" took 14.504325792s
	I0911 04:13:33.382469    2493 start.go:300] post-start starting for "ingress-addon-legacy-131000" (driver="qemu2")
	I0911 04:13:33.382474    2493 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 04:13:33.382544    2493 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 04:13:33.382558    2493 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/ingress-addon-legacy-131000/id_rsa Username:docker}
	I0911 04:13:33.411200    2493 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 04:13:33.412500    2493 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 04:13:33.412506    2493 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17225-951/.minikube/addons for local assets ...
	I0911 04:13:33.412570    2493 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17225-951/.minikube/files for local assets ...
	I0911 04:13:33.412667    2493 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17225-951/.minikube/files/etc/ssl/certs/13932.pem -> 13932.pem in /etc/ssl/certs
	I0911 04:13:33.412671    2493 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17225-951/.minikube/files/etc/ssl/certs/13932.pem -> /etc/ssl/certs/13932.pem
	I0911 04:13:33.412777    2493 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 04:13:33.415222    2493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/files/etc/ssl/certs/13932.pem --> /etc/ssl/certs/13932.pem (1708 bytes)
	I0911 04:13:33.422466    2493 start.go:303] post-start completed in 39.993208ms
	I0911 04:13:33.422835    2493 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/config.json ...
	I0911 04:13:33.422994    2493 start.go:128] duration metric: createHost completed in 14.56426375s
	I0911 04:13:33.423033    2493 main.go:141] libmachine: Using SSH client type: native
	I0911 04:13:33.423248    2493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ebe3b0] 0x102ec0e10 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0911 04:13:33.423253    2493 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0911 04:13:33.476564    2493 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694430813.562367835
	
	I0911 04:13:33.476573    2493 fix.go:206] guest clock: 1694430813.562367835
	I0911 04:13:33.476577    2493 fix.go:219] Guest: 2023-09-11 04:13:33.562367835 -0700 PDT Remote: 2023-09-11 04:13:33.423005 -0700 PDT m=+22.821053959 (delta=139.362835ms)
	I0911 04:13:33.476590    2493 fix.go:190] guest clock delta is within tolerance: 139.362835ms
	I0911 04:13:33.476592    2493 start.go:83] releasing machines lock for "ingress-addon-legacy-131000", held for 14.617907584s
	I0911 04:13:33.476919    2493 ssh_runner.go:195] Run: cat /version.json
	I0911 04:13:33.476929    2493 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/ingress-addon-legacy-131000/id_rsa Username:docker}
	I0911 04:13:33.476957    2493 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 04:13:33.476977    2493 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/ingress-addon-legacy-131000/id_rsa Username:docker}
	I0911 04:13:33.548508    2493 ssh_runner.go:195] Run: systemctl --version
	I0911 04:13:33.550617    2493 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0911 04:13:33.552577    2493 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 04:13:33.552610    2493 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0911 04:13:33.555465    2493 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0911 04:13:33.560281    2493 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0911 04:13:33.560288    2493 start.go:466] detecting cgroup driver to use...
	I0911 04:13:33.560355    2493 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 04:13:33.567649    2493 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0911 04:13:33.571199    2493 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0911 04:13:33.574728    2493 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0911 04:13:33.574753    2493 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0911 04:13:33.578211    2493 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0911 04:13:33.581080    2493 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0911 04:13:33.584012    2493 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0911 04:13:33.586930    2493 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 04:13:33.590370    2493 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0911 04:13:33.593623    2493 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 04:13:33.596294    2493 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 04:13:33.600058    2493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 04:13:33.674602    2493 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0911 04:13:33.683268    2493 start.go:466] detecting cgroup driver to use...
	I0911 04:13:33.683333    2493 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0911 04:13:33.689298    2493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 04:13:33.694707    2493 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 04:13:33.707797    2493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 04:13:33.712301    2493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0911 04:13:33.716402    2493 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0911 04:13:33.758573    2493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0911 04:13:33.763891    2493 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 04:13:33.769196    2493 ssh_runner.go:195] Run: which cri-dockerd
	I0911 04:13:33.770398    2493 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0911 04:13:33.773131    2493 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0911 04:13:33.777831    2493 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0911 04:13:33.852266    2493 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0911 04:13:33.932520    2493 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0911 04:13:33.932534    2493 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0911 04:13:33.937669    2493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 04:13:34.004299    2493 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0911 04:13:35.169836    2493 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.165554834s)
	I0911 04:13:35.169903    2493 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0911 04:13:35.187556    2493 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0911 04:13:35.208974    2493 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.5 ...
	I0911 04:13:35.209145    2493 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0911 04:13:35.210618    2493 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
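	# The /etc/hosts pin above uses a strip-then-append rewrite so repeated
	# starts never duplicate the entry (idiom as run):
	#   { grep -v $'\thost.minikube.internal$' /etc/hosts; \
	#     echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$
	#   sudo cp /tmp/h.$$ /etc/hosts
	# 192.168.105.1 is the vmnet gateway, i.e. the macOS host as seen from the guest.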
	I0911 04:13:35.214641    2493 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0911 04:13:35.214688    2493 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0911 04:13:35.219781    2493 docker.go:636] Got preloaded images: 
	I0911 04:13:35.219787    2493 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0911 04:13:35.219822    2493 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0911 04:13:35.223276    2493 ssh_runner.go:195] Run: which lz4
	I0911 04:13:35.224384    2493 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0911 04:13:35.224472    2493 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0911 04:13:35.225669    2493 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0911 04:13:35.225680    2493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
	I0911 04:13:36.883706    2493 docker.go:600] Took 1.659318 seconds to copy over tarball
	I0911 04:13:36.883767    2493 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0911 04:13:38.190658    2493 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.30689s)
	I0911 04:13:38.190672    2493 ssh_runner.go:146] rm: /preloaded.tar.lz4
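	# Preload flow just completed, condensed: the cached tarball is copied into
	# the VM and unpacked directly over /var/lib/docker (tar line from the log;
	# scp shown as the equivalent of the ssh_runner copy):
	#   scp preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 docker@192.168.105.6:/preloaded.tar.lz4
	#   sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4
	# The tarball carries a ready-made docker overlay2 image store, so the
	# docker restart that follows picks the images up without pulling.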
	I0911 04:13:38.215866    2493 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0911 04:13:38.220454    2493 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0911 04:13:38.227897    2493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 04:13:38.307914    2493 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0911 04:13:39.827716    2493 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.519818458s)
	I0911 04:13:39.827806    2493 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0911 04:13:39.833835    2493 docker.go:636] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0911 04:13:39.833843    2493 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0911 04:13:39.833847    2493 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0911 04:13:39.845019    2493 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0911 04:13:39.845095    2493 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0911 04:13:39.845132    2493 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 04:13:39.845269    2493 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0911 04:13:39.845379    2493 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0911 04:13:39.845446    2493 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0911 04:13:39.845493    2493 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0911 04:13:39.845496    2493 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0911 04:13:39.855843    2493 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0911 04:13:39.855872    2493 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 04:13:39.855912    2493 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0911 04:13:39.855943    2493 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0911 04:13:39.855990    2493 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0911 04:13:39.856008    2493 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0911 04:13:39.856053    2493 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0911 04:13:39.856725    2493 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	W0911 04:13:40.426514    2493 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0911 04:13:40.426636    2493 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0911 04:13:40.437218    2493 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0911 04:13:40.437242    2493 docker.go:316] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0911 04:13:40.437297    2493 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0911 04:13:40.442498    2493 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0911 04:13:40.679086    2493 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0911 04:13:40.685396    2493 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0911 04:13:40.685418    2493 docker.go:316] Removing image: registry.k8s.io/pause:3.2
	I0911 04:13:40.685474    2493 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0911 04:13:40.691582    2493 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	W0911 04:13:40.868286    2493 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0911 04:13:40.868402    2493 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0911 04:13:40.874867    2493 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0911 04:13:40.874895    2493 docker.go:316] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0911 04:13:40.874935    2493 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0911 04:13:40.880738    2493 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	W0911 04:13:40.891042    2493 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0911 04:13:40.891124    2493 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 04:13:40.897903    2493 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0911 04:13:40.897929    2493 docker.go:316] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 04:13:40.897969    2493 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 04:13:40.908771    2493 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	W0911 04:13:41.072297    2493 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0911 04:13:41.072447    2493 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0911 04:13:41.085129    2493 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0911 04:13:41.085151    2493 docker.go:316] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0911 04:13:41.085197    2493 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0911 04:13:41.091487    2493 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	W0911 04:13:41.355811    2493 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0911 04:13:41.355939    2493 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0911 04:13:41.362408    2493 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0911 04:13:41.362428    2493 docker.go:316] Removing image: registry.k8s.io/coredns:1.6.7
	I0911 04:13:41.362487    2493 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0911 04:13:41.368378    2493 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	W0911 04:13:41.534421    2493 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0911 04:13:41.534538    2493 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0911 04:13:41.541028    2493 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0911 04:13:41.541051    2493 docker.go:316] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0911 04:13:41.541095    2493 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0911 04:13:41.546497    2493 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	W0911 04:13:41.719815    2493 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0911 04:13:41.720160    2493 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0911 04:13:41.738383    2493 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0911 04:13:41.738446    2493 docker.go:316] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0911 04:13:41.738557    2493 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0911 04:13:41.751355    2493 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I0911 04:13:41.751445    2493 cache_images.go:92] LoadImages completed in 1.917627458s
	W0911 04:13:41.751544    2493 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
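Each needs-transfer block above follows the same pattern: the preloaded tag exists, but its image ID is the amd64 build, so minikube compares the runtime's ID against the hash expected for arm64, removes the mismatched image, and queues a load from the on-disk cache (which then fails because the etcd cache file is missing, hence the warning). A sketch of the comparison step (note: docker reports IDs with a "sha256:" prefix, so the expected-ID formatting here is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether the image is absent or present under a
// different ID than the one recorded for the wanted architecture.
func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // not present at all: must transfer
	}
	return strings.TrimSpace(string(out)) != wantID
}

func main() {
	fmt.Println(needsTransfer("registry.k8s.io/pause:3.2",
		"sha256:2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c"))
}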
	I0911 04:13:41.751648    2493 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0911 04:13:41.769434    2493 cni.go:84] Creating CNI manager for ""
	I0911 04:13:41.769450    2493 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0911 04:13:41.769466    2493 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 04:13:41.769479    2493 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.6 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-131000 NodeName:ingress-addon-legacy-131000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0911 04:13:41.769594    2493 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-131000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 04:13:41.769657    2493 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-131000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-131000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
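Note on the kubelet unit above: the empty ExecStart= line is the standard systemd drop-in idiom. It clears the base unit's ExecStart so the following line fully redefines the kubelet command line, rather than systemd rejecting a second ExecStart for a non-oneshot service.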
	I0911 04:13:41.769724    2493 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0911 04:13:41.775168    2493 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 04:13:41.775221    2493 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 04:13:41.779154    2493 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (355 bytes)
	I0911 04:13:41.785695    2493 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0911 04:13:41.791248    2493 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2127 bytes)
	I0911 04:13:41.796858    2493 ssh_runner.go:195] Run: grep 192.168.105.6	control-plane.minikube.internal$ /etc/hosts
	I0911 04:13:41.798185    2493 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 04:13:41.802039    2493 certs.go:56] Setting up /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000 for IP: 192.168.105.6
	I0911 04:13:41.802048    2493 certs.go:190] acquiring lock for shared ca certs: {Name:mkb829580b94fbef660a72f5d00b6f296afd6da6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:13:41.802200    2493 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17225-951/.minikube/ca.key
	I0911 04:13:41.802253    2493 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17225-951/.minikube/proxy-client-ca.key
	I0911 04:13:41.802279    2493 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/client.key
	I0911 04:13:41.802286    2493 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/client.crt with IP's: []
	I0911 04:13:41.901927    2493 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/client.crt ...
	I0911 04:13:41.901932    2493 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/client.crt: {Name:mk7534f44f25dd229a415782d4a523bf971c5c89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:13:41.902176    2493 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/client.key ...
	I0911 04:13:41.902182    2493 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/client.key: {Name:mk671d42986fe3182dbb5829b3169a92d0bcf0df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:13:41.902299    2493 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/apiserver.key.b354f644
	I0911 04:13:41.902307    2493 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/apiserver.crt.b354f644 with IP's: [192.168.105.6 10.96.0.1 127.0.0.1 10.0.0.1]
	I0911 04:13:41.936610    2493 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/apiserver.crt.b354f644 ...
	I0911 04:13:41.936613    2493 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/apiserver.crt.b354f644: {Name:mk9f713cf20a86a250f05d32899fe2d4e250a792 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:13:41.936745    2493 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/apiserver.key.b354f644 ...
	I0911 04:13:41.936748    2493 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/apiserver.key.b354f644: {Name:mk981a54b374552bf525ee64f3fbdcbceb18b862 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:13:41.936860    2493 certs.go:337] copying /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/apiserver.crt.b354f644 -> /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/apiserver.crt
	I0911 04:13:41.936951    2493 certs.go:341] copying /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/apiserver.key.b354f644 -> /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/apiserver.key
	I0911 04:13:41.937034    2493 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/proxy-client.key
	I0911 04:13:41.937041    2493 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/proxy-client.crt with IP's: []
	I0911 04:13:42.035120    2493 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/proxy-client.crt ...
	I0911 04:13:42.035126    2493 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/proxy-client.crt: {Name:mkd24e3eb797c1d5d6940e094944a95c5aba7c23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:13:42.035257    2493 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/proxy-client.key ...
	I0911 04:13:42.035259    2493 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/proxy-client.key: {Name:mk8fd3b1c439761e3908e58143432c3ab6961e3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
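Each "generating ... signed cert" pair above produces a key plus a certificate whose IP SANs cover the node address, the in-cluster apiserver VIP (10.96.0.1, the first address of the 10.96.0.0/12 service CIDR), and loopback, so the apiserver certificate verifies on every path clients use. A standard-library sketch of the SAN portion (self-signed for brevity, whereas minikube signs with its CA):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The IP SANs from the log line above.
		IPAddresses: []net.IP{
			net.ParseIP("192.168.105.6"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	fmt.Println(len(der), err)
}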
	I0911 04:13:42.035365    2493 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0911 04:13:42.035381    2493 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0911 04:13:42.035397    2493 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0911 04:13:42.035410    2493 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0911 04:13:42.035421    2493 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17225-951/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0911 04:13:42.035432    2493 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17225-951/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0911 04:13:42.035443    2493 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17225-951/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0911 04:13:42.035460    2493 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17225-951/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0911 04:13:42.035537    2493 certs.go:437] found cert: /Users/jenkins/minikube-integration/17225-951/.minikube/certs/Users/jenkins/minikube-integration/17225-951/.minikube/certs/1393.pem (1338 bytes)
	W0911 04:13:42.035575    2493 certs.go:433] ignoring /Users/jenkins/minikube-integration/17225-951/.minikube/certs/Users/jenkins/minikube-integration/17225-951/.minikube/certs/1393_empty.pem, impossibly tiny 0 bytes
	I0911 04:13:42.035582    2493 certs.go:437] found cert: /Users/jenkins/minikube-integration/17225-951/.minikube/certs/Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 04:13:42.035602    2493 certs.go:437] found cert: /Users/jenkins/minikube-integration/17225-951/.minikube/certs/Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem (1078 bytes)
	I0911 04:13:42.035620    2493 certs.go:437] found cert: /Users/jenkins/minikube-integration/17225-951/.minikube/certs/Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem (1123 bytes)
	I0911 04:13:42.035638    2493 certs.go:437] found cert: /Users/jenkins/minikube-integration/17225-951/.minikube/certs/Users/jenkins/minikube-integration/17225-951/.minikube/certs/key.pem (1675 bytes)
	I0911 04:13:42.035680    2493 certs.go:437] found cert: /Users/jenkins/minikube-integration/17225-951/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17225-951/.minikube/files/etc/ssl/certs/13932.pem (1708 bytes)
	I0911 04:13:42.035704    2493 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17225-951/.minikube/certs/1393.pem -> /usr/share/ca-certificates/1393.pem
	I0911 04:13:42.035720    2493 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17225-951/.minikube/files/etc/ssl/certs/13932.pem -> /usr/share/ca-certificates/13932.pem
	I0911 04:13:42.035750    2493 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17225-951/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0911 04:13:42.036081    2493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 04:13:42.043432    2493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0911 04:13:42.050805    2493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 04:13:42.058201    2493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0911 04:13:42.065374    2493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 04:13:42.072402    2493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 04:13:42.079093    2493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 04:13:42.086511    2493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0911 04:13:42.094131    2493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/certs/1393.pem --> /usr/share/ca-certificates/1393.pem (1338 bytes)
	I0911 04:13:42.101388    2493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/files/etc/ssl/certs/13932.pem --> /usr/share/ca-certificates/13932.pem (1708 bytes)
	I0911 04:13:42.108340    2493 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17225-951/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 04:13:42.115088    2493 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 04:13:42.120220    2493 ssh_runner.go:195] Run: openssl version
	I0911 04:13:42.122003    2493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1393.pem && ln -fs /usr/share/ca-certificates/1393.pem /etc/ssl/certs/1393.pem"
	I0911 04:13:42.125022    2493 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1393.pem
	I0911 04:13:42.126467    2493 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 10:35 /usr/share/ca-certificates/1393.pem
	I0911 04:13:42.126487    2493 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1393.pem
	I0911 04:13:42.128441    2493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1393.pem /etc/ssl/certs/51391683.0"
	I0911 04:13:42.131502    2493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13932.pem && ln -fs /usr/share/ca-certificates/13932.pem /etc/ssl/certs/13932.pem"
	I0911 04:13:42.134986    2493 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13932.pem
	I0911 04:13:42.136394    2493 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 10:35 /usr/share/ca-certificates/13932.pem
	I0911 04:13:42.136418    2493 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13932.pem
	I0911 04:13:42.138081    2493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13932.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 04:13:42.141169    2493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 04:13:42.144057    2493 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 04:13:42.145462    2493 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0911 04:13:42.145486    2493 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 04:13:42.147441    2493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
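The three blocks above repeat one idiom per CA file: compute the certificate's subject-name hash with openssl x509 -hash, then link the file into /etc/ssl/certs under that hash plus a ".0" suffix, which is how OpenSSL's hashed-directory lookup finds trust roots. A sketch of deriving the link name (assumes an openssl binary on PATH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hashedLinkName returns the /etc/ssl/certs filename OpenSSL expects for a
// certificate: "<subject hash>.0" (the suffix disambiguates hash collisions).
func hashedLinkName(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)) + ".0", nil
}

func main() {
	name, err := hashedLinkName("/usr/share/ca-certificates/minikubeCA.pem")
	fmt.Println(name, err) // e.g. "b5213941.0", as in the log above
}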
	I0911 04:13:42.150845    2493 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 04:13:42.152230    2493 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0911 04:13:42.152258    2493 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-131000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-131000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:13:42.152337    2493 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0911 04:13:42.158030    2493 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 04:13:42.160959    2493 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 04:13:42.163646    2493 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 04:13:42.166720    2493 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 04:13:42.166737    2493 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0911 04:13:42.191374    2493 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0911 04:13:42.191437    2493 kubeadm.go:322] [preflight] Running pre-flight checks
	I0911 04:13:42.272190    2493 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0911 04:13:42.272249    2493 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0911 04:13:42.272295    2493 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0911 04:13:42.318109    2493 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0911 04:13:42.319345    2493 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0911 04:13:42.319366    2493 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0911 04:13:42.408514    2493 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0911 04:13:42.419761    2493 out.go:204]   - Generating certificates and keys ...
	I0911 04:13:42.419801    2493 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0911 04:13:42.419835    2493 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0911 04:13:42.615925    2493 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0911 04:13:42.881635    2493 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0911 04:13:42.970049    2493 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0911 04:13:43.087965    2493 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0911 04:13:43.217798    2493 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0911 04:13:43.217882    2493 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-131000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0911 04:13:43.339224    2493 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0911 04:13:43.339410    2493 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-131000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0911 04:13:43.427545    2493 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0911 04:13:43.663355    2493 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0911 04:13:43.738186    2493 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0911 04:13:43.738320    2493 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0911 04:13:43.869089    2493 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0911 04:13:43.935634    2493 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0911 04:13:44.127701    2493 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0911 04:13:44.305601    2493 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0911 04:13:44.305808    2493 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0911 04:13:44.310157    2493 out.go:204]   - Booting up control plane ...
	I0911 04:13:44.310211    2493 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0911 04:13:44.310285    2493 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0911 04:13:44.310320    2493 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0911 04:13:44.310362    2493 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0911 04:13:44.311458    2493 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0911 04:13:55.816879    2493 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.504842 seconds
	I0911 04:13:55.817048    2493 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0911 04:13:55.835275    2493 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0911 04:13:56.362330    2493 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0911 04:13:56.362584    2493 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-131000 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0911 04:13:56.865370    2493 kubeadm.go:322] [bootstrap-token] Using token: ranew1.rg7descoy9oeeolf
	I0911 04:13:56.886301    2493 out.go:204]   - Configuring RBAC rules ...
	I0911 04:13:56.886383    2493 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0911 04:13:56.886440    2493 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0911 04:13:56.891692    2493 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0911 04:13:56.892809    2493 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0911 04:13:56.893564    2493 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0911 04:13:56.894370    2493 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0911 04:13:56.898906    2493 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0911 04:13:57.129707    2493 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0911 04:13:57.274539    2493 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0911 04:13:57.275006    2493 kubeadm.go:322] 
	I0911 04:13:57.275045    2493 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0911 04:13:57.275049    2493 kubeadm.go:322] 
	I0911 04:13:57.275089    2493 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0911 04:13:57.275094    2493 kubeadm.go:322] 
	I0911 04:13:57.275107    2493 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0911 04:13:57.275141    2493 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0911 04:13:57.275172    2493 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0911 04:13:57.275175    2493 kubeadm.go:322] 
	I0911 04:13:57.275223    2493 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0911 04:13:57.275292    2493 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0911 04:13:57.275329    2493 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0911 04:13:57.275334    2493 kubeadm.go:322] 
	I0911 04:13:57.275386    2493 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0911 04:13:57.275432    2493 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0911 04:13:57.275434    2493 kubeadm.go:322] 
	I0911 04:13:57.275480    2493 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ranew1.rg7descoy9oeeolf \
	I0911 04:13:57.275555    2493 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:fefaa3365accf94cefbb66337f5f2e8a6ced437eccd2cdfbf367c2be71bce2cf \
	I0911 04:13:57.275573    2493 kubeadm.go:322]     --control-plane 
	I0911 04:13:57.275576    2493 kubeadm.go:322] 
	I0911 04:13:57.275641    2493 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0911 04:13:57.275652    2493 kubeadm.go:322] 
	I0911 04:13:57.275698    2493 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ranew1.rg7descoy9oeeolf \
	I0911 04:13:57.275764    2493 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:fefaa3365accf94cefbb66337f5f2e8a6ced437eccd2cdfbf367c2be71bce2cf 
	I0911 04:13:57.275863    2493 kubeadm.go:322] W0911 11:13:42.277325    1415 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0911 04:13:57.275977    2493 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0911 04:13:57.276065    2493 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.5. Latest validated version: 19.03
	I0911 04:13:57.276138    2493 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0911 04:13:57.276217    2493 kubeadm.go:322] W0911 11:13:44.395136    1415 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0911 04:13:57.276302    2493 kubeadm.go:322] W0911 11:13:44.395577    1415 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0911 04:13:57.276310    2493 cni.go:84] Creating CNI manager for ""
	I0911 04:13:57.276318    2493 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0911 04:13:57.276328    2493 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0911 04:13:57.276414    2493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=c0ed13cc972769b226a536a2831a80a40376f282 minikube.k8s.io/name=ingress-addon-legacy-131000 minikube.k8s.io/updated_at=2023_09_11T04_13_57_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:13:57.276417    2493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:13:57.279957    2493 ops.go:34] apiserver oom_adj: -16
	I0911 04:13:57.350355    2493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:13:57.390398    2493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:13:57.927260    2493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:13:58.425463    2493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:13:58.927226    2493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:13:59.427127    2493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:13:59.927216    2493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:14:00.427211    2493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:14:00.927198    2493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:14:01.427185    2493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:14:01.927165    2493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:14:02.427105    2493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:14:02.927241    2493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:14:03.427270    2493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:14:03.927294    2493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:14:04.427234    2493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:14:04.927197    2493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:14:05.427193    2493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:14:05.926932    2493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:14:06.427143    2493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:14:06.927132    2493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:14:07.427137    2493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:14:07.927260    2493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:14:08.427141    2493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:14:08.927061    2493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:14:09.427083    2493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:14:09.927208    2493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:14:10.427144    2493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:14:10.926987    2493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:14:11.427023    2493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:14:11.927094    2493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:14:12.426979    2493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:14:12.472020    2493 kubeadm.go:1081] duration metric: took 15.195752208s to wait for elevateKubeSystemPrivileges.
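The long run of identical "kubectl get sa default" commands above is a bounded poll: right after kubeadm init, the default ServiceAccount does not exist until the controller-manager creates it, so minikube retries on a short interval (about every 500ms, judging by the timestamps) until the command succeeds. A generic sketch of that wait loop:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitFor reruns a command until it exits zero or the timeout elapses.
func waitFor(cmdline []string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command(cmdline[0], cmdline[1:]...).Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %v", timeout, cmdline)
}

func main() {
	fmt.Println(waitFor([]string{"kubectl", "get", "sa", "default"}, 2*time.Minute))
}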
	I0911 04:14:12.472037    2493 kubeadm.go:406] StartCluster complete in 30.320041125s
	I0911 04:14:12.472046    2493 settings.go:142] acquiring lock: {Name:mkc25efdeb235bb06c8f15f7bc2dab1fff3cf449 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:14:12.472135    2493 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 04:14:12.472556    2493 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/kubeconfig: {Name:mk9102949afcf8989652bad8d36d55e289cc75c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:14:12.472785    2493 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0911 04:14:12.472795    2493 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0911 04:14:12.472841    2493 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-131000"
	I0911 04:14:12.472844    2493 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-131000"
	I0911 04:14:12.472848    2493 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-131000"
	I0911 04:14:12.472850    2493 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-131000"
	I0911 04:14:12.472874    2493 host.go:66] Checking if "ingress-addon-legacy-131000" exists ...
	I0911 04:14:12.473016    2493 kapi.go:59] client config for ingress-addon-legacy-131000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/client.key", CAFile:"/Users/jenkins/minikube-integration/17225-951/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104279d70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0911 04:14:12.473073    2493 config.go:182] Loaded profile config "ingress-addon-legacy-131000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0911 04:14:12.473420    2493 cert_rotation.go:137] Starting client certificate rotation controller
	I0911 04:14:12.473987    2493 kapi.go:59] client config for ingress-addon-legacy-131000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/client.key", CAFile:"/Users/jenkins/minikube-integration/17225-951/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104279d70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0911 04:14:12.479346    2493 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 04:14:12.483326    2493 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 04:14:12.483335    2493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0911 04:14:12.483343    2493 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/ingress-addon-legacy-131000/id_rsa Username:docker}
	I0911 04:14:12.487482    2493 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-131000"
	I0911 04:14:12.487500    2493 host.go:66] Checking if "ingress-addon-legacy-131000" exists ...
	I0911 04:14:12.488235    2493 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0911 04:14:12.488245    2493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0911 04:14:12.488250    2493 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/ingress-addon-legacy-131000/id_rsa Username:docker}
	I0911 04:14:12.494468    2493 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-131000" context rescaled to 1 replicas
	I0911 04:14:12.494486    2493 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:14:12.498301    2493 out.go:177] * Verifying Kubernetes components...
	I0911 04:14:12.505398    2493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 04:14:12.520183    2493 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0911 04:14:12.520353    2493 kapi.go:59] client config for ingress-addon-legacy-131000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/client.key", CAFile:"/Users/jenkins/minikube-integration/17225-951/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104279d70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0911 04:14:12.520491    2493 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-131000" to be "Ready" ...
	I0911 04:14:12.521972    2493 node_ready.go:49] node "ingress-addon-legacy-131000" has status "Ready":"True"
	I0911 04:14:12.521980    2493 node_ready.go:38] duration metric: took 1.477584ms waiting for node "ingress-addon-legacy-131000" to be "Ready" ...
	I0911 04:14:12.521983    2493 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 04:14:12.524808    2493 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-131000" in "kube-system" namespace to be "Ready" ...
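
Each of these pod_ready waits is a poll loop against the apiserver: fetch the pod, inspect its PodReady condition, repeat until it reports True or the 6m budget runs out. A hypothetical reconstruction using plain client-go (the helper name and polling interval are our choices, and wait.PollUntilContextTimeout assumes a reasonably recent k8s.io/apimachinery):

    package podwait

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls until the named pod's PodReady condition is True.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient apiserver hiccups: keep polling
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }
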
	I0911 04:14:12.525163    2493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 04:14:12.543706    2493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0911 04:14:12.760186    2493 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0911 04:14:12.779884    2493 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0911 04:14:12.788884    2493 addons.go:502] enable addons completed in 316.089417ms: enabled=[storage-provisioner default-storageclass]
	I0911 04:14:14.037350    2493 pod_ready.go:92] pod "etcd-ingress-addon-legacy-131000" in "kube-system" namespace has status "Ready":"True"
	I0911 04:14:14.037378    2493 pod_ready.go:81] duration metric: took 1.512564s waiting for pod "etcd-ingress-addon-legacy-131000" in "kube-system" namespace to be "Ready" ...
	I0911 04:14:14.037391    2493 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-131000" in "kube-system" namespace to be "Ready" ...
	I0911 04:14:14.043158    2493 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-131000" in "kube-system" namespace has status "Ready":"True"
	I0911 04:14:14.043168    2493 pod_ready.go:81] duration metric: took 5.770417ms waiting for pod "kube-apiserver-ingress-addon-legacy-131000" in "kube-system" namespace to be "Ready" ...
	I0911 04:14:14.043177    2493 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-131000" in "kube-system" namespace to be "Ready" ...
	I0911 04:14:14.047676    2493 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-131000" in "kube-system" namespace has status "Ready":"True"
	I0911 04:14:14.047688    2493 pod_ready.go:81] duration metric: took 4.503542ms waiting for pod "kube-controller-manager-ingress-addon-legacy-131000" in "kube-system" namespace to be "Ready" ...
	I0911 04:14:14.047696    2493 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z4kk4" in "kube-system" namespace to be "Ready" ...
	I0911 04:14:14.122580    2493 request.go:629] Waited for 73.124834ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes/ingress-addon-legacy-131000
	I0911 04:14:14.322554    2493 request.go:629] Waited for 197.869166ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z4kk4
	I0911 04:14:14.522551    2493 request.go:629] Waited for 198.435375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes/ingress-addon-legacy-131000
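
These "Waited for ... due to client-side throttling" entries come from client-go's local rate limiter, not from API Priority and Fairness on the server (the message says as much). The rest.Config dumps above show QPS:0 and Burst:0, which client-go treats as "use the defaults" (roughly 5 requests/sec with a burst of 10 in the client-go versions we are familiar with), so bursts of status polling queue up locally. A hypothetical tweak if the delays mattered:

    package tuning

    import "k8s.io/client-go/rest"

    // raiseClientLimits is illustrative only: bumping QPS/Burst on the config
    // shown in the log would trade extra apiserver load for lower wait times.
    func raiseClientLimits(cfg *rest.Config) {
        cfg.QPS = 50    // 0 means "default" (~5 requests/sec)
        cfg.Burst = 100 // 0 means "default" (~10)
    }
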
	I0911 04:14:14.523932    2493 pod_ready.go:92] pod "kube-proxy-z4kk4" in "kube-system" namespace has status "Ready":"True"
	I0911 04:14:14.523944    2493 pod_ready.go:81] duration metric: took 476.244333ms waiting for pod "kube-proxy-z4kk4" in "kube-system" namespace to be "Ready" ...
	I0911 04:14:14.523949    2493 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-131000" in "kube-system" namespace to be "Ready" ...
	I0911 04:14:14.722561    2493 request.go:629] Waited for 198.574709ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-131000
	I0911 04:14:14.922629    2493 request.go:629] Waited for 197.720875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes/ingress-addon-legacy-131000
	I0911 04:14:14.930556    2493 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-131000" in "kube-system" namespace has status "Ready":"True"
	I0911 04:14:14.930590    2493 pod_ready.go:81] duration metric: took 406.633458ms waiting for pod "kube-scheduler-ingress-addon-legacy-131000" in "kube-system" namespace to be "Ready" ...
	I0911 04:14:14.930610    2493 pod_ready.go:38] duration metric: took 2.408625334s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 04:14:14.930655    2493 api_server.go:52] waiting for apiserver process to appear ...
	I0911 04:14:14.930931    2493 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 04:14:14.947537    2493 api_server.go:72] duration metric: took 2.453035167s to wait for apiserver process to appear ...
	I0911 04:14:14.947565    2493 api_server.go:88] waiting for apiserver healthz status ...
	I0911 04:14:14.947591    2493 api_server.go:253] Checking apiserver healthz at https://192.168.105.6:8443/healthz ...
	I0911 04:14:14.958146    2493 api_server.go:279] https://192.168.105.6:8443/healthz returned 200:
	ok
	I0911 04:14:14.959098    2493 api_server.go:141] control plane version: v1.18.20
	I0911 04:14:14.959112    2493 api_server.go:131] duration metric: took 11.540792ms to wait for apiserver health ...
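
The healthz wait boils down to an authenticated HTTPS GET against the endpoint in the log, expecting status 200 with body "ok" (the two lines above show exactly that). A self-contained sketch using only the Go standard library and the profile's cert files; this mirrors the check but is not minikube's api_server.go code:

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io"
        "log"
        "net/http"
        "os"
    )

    func main() {
        cert, err := tls.LoadX509KeyPair(
            "/Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/client.crt",
            "/Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/client.key")
        if err != nil {
            log.Fatal(err)
        }
        caPEM, err := os.ReadFile("/Users/jenkins/minikube-integration/17225-951/.minikube/ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)
        client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{
            Certificates: []tls.Certificate{cert},
            RootCAs:      pool,
        }}}
        resp, err := client.Get("https://192.168.105.6:8443/healthz")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body)) // the log above saw: 200 ok
    }
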
	I0911 04:14:14.959127    2493 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 04:14:15.122608    2493 request.go:629] Waited for 163.402458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0911 04:14:15.135638    2493 system_pods.go:59] 7 kube-system pods found
	I0911 04:14:15.135680    2493 system_pods.go:61] "coredns-66bff467f8-2k78q" [c4928cdd-c0af-441a-bbe6-cfa396705c20] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 04:14:15.135705    2493 system_pods.go:61] "etcd-ingress-addon-legacy-131000" [eddc4ba2-9578-483c-938b-a9724170144a] Running
	I0911 04:14:15.135723    2493 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-131000" [813f2dbd-1017-433a-a18d-a5aab2ce4bbd] Running
	I0911 04:14:15.135732    2493 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-131000" [f2fb8ca7-1e3f-4f45-869a-5825b2b94ea5] Running
	I0911 04:14:15.135743    2493 system_pods.go:61] "kube-proxy-z4kk4" [0afd9d7e-1685-473e-bf47-d0a8e06985ed] Running
	I0911 04:14:15.135756    2493 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-131000" [88756b92-c255-45f5-b754-6eb6656f7225] Running
	I0911 04:14:15.135766    2493 system_pods.go:61] "storage-provisioner" [328b863e-70f9-42b4-adb1-6e88a25f2861] Running
	I0911 04:14:15.135774    2493 system_pods.go:74] duration metric: took 176.638834ms to wait for pod list to return data ...
	I0911 04:14:15.135790    2493 default_sa.go:34] waiting for default service account to be created ...
	I0911 04:14:15.322600    2493 request.go:629] Waited for 186.662083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/default/serviceaccounts
	I0911 04:14:15.330602    2493 default_sa.go:45] found service account: "default"
	I0911 04:14:15.330640    2493 default_sa.go:55] duration metric: took 194.838833ms for default service account to be created ...
	I0911 04:14:15.330667    2493 system_pods.go:116] waiting for k8s-apps to be running ...
	I0911 04:14:15.522653    2493 request.go:629] Waited for 191.848584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0911 04:14:15.533837    2493 system_pods.go:86] 7 kube-system pods found
	I0911 04:14:15.533877    2493 system_pods.go:89] "coredns-66bff467f8-2k78q" [c4928cdd-c0af-441a-bbe6-cfa396705c20] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 04:14:15.533894    2493 system_pods.go:89] "etcd-ingress-addon-legacy-131000" [eddc4ba2-9578-483c-938b-a9724170144a] Running
	I0911 04:14:15.533905    2493 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-131000" [813f2dbd-1017-433a-a18d-a5aab2ce4bbd] Running
	I0911 04:14:15.533916    2493 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-131000" [f2fb8ca7-1e3f-4f45-869a-5825b2b94ea5] Running
	I0911 04:14:15.533924    2493 system_pods.go:89] "kube-proxy-z4kk4" [0afd9d7e-1685-473e-bf47-d0a8e06985ed] Running
	I0911 04:14:15.533932    2493 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-131000" [88756b92-c255-45f5-b754-6eb6656f7225] Running
	I0911 04:14:15.533941    2493 system_pods.go:89] "storage-provisioner" [328b863e-70f9-42b4-adb1-6e88a25f2861] Running
	I0911 04:14:15.533975    2493 system_pods.go:126] duration metric: took 203.295208ms to wait for k8s-apps to be running ...
	I0911 04:14:15.533992    2493 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 04:14:15.534200    2493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 04:14:15.550204    2493 system_svc.go:56] duration metric: took 16.206958ms WaitForService to wait for kubelet.
	I0911 04:14:15.550226    2493 kubeadm.go:581] duration metric: took 3.055733084s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 04:14:15.550249    2493 node_conditions.go:102] verifying NodePressure condition ...
	I0911 04:14:15.722614    2493 request.go:629] Waited for 172.280875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes
	I0911 04:14:15.727313    2493 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0911 04:14:15.727340    2493 node_conditions.go:123] node cpu capacity is 2
	I0911 04:14:15.727357    2493 node_conditions.go:105] duration metric: took 177.101708ms to run NodePressure ...
	I0911 04:14:15.727375    2493 start.go:228] waiting for startup goroutines ...
	I0911 04:14:15.727387    2493 start.go:233] waiting for cluster config update ...
	I0911 04:14:15.727410    2493 start.go:242] writing updated cluster config ...
	I0911 04:14:15.728147    2493 ssh_runner.go:195] Run: rm -f paused
	I0911 04:14:15.778471    2493 start.go:600] kubectl: 1.27.2, cluster: 1.18.20 (minor skew: 9)
	I0911 04:14:15.782493    2493 out.go:177] 
	W0911 04:14:15.786623    2493 out.go:239] ! /usr/local/bin/kubectl is version 1.27.2, which may have incompatibilities with Kubernetes 1.18.20.
	I0911 04:14:15.791472    2493 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0911 04:14:15.798527    2493 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-131000" cluster and "default" namespace by default
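
The skew warning is plain arithmetic: the host kubectl is v1.27.2 while the cluster runs v1.18.20, a minor-version gap of 27 - 18 = 9, far outside the one-minor-version window kubectl officially supports. The suggested 'minikube kubectl -- get pods -A' sidesteps this by fetching and invoking a kubectl binary that matches the cluster's v1.18.20.
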
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-09-11 11:13:30 UTC, ends at Mon 2023-09-11 11:15:29 UTC. --
	Sep 11 11:15:03 ingress-addon-legacy-131000 dockerd[1076]: time="2023-09-11T11:15:03.394630437Z" level=info msg="shim disconnected" id=ac1cffd53506fde8caa4c079fe04a462944ab12953e3f95ee05a5c4fe4cfd822 namespace=moby
	Sep 11 11:15:03 ingress-addon-legacy-131000 dockerd[1076]: time="2023-09-11T11:15:03.394658310Z" level=warning msg="cleaning up after shim disconnected" id=ac1cffd53506fde8caa4c079fe04a462944ab12953e3f95ee05a5c4fe4cfd822 namespace=moby
	Sep 11 11:15:03 ingress-addon-legacy-131000 dockerd[1076]: time="2023-09-11T11:15:03.394662560Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 11 11:15:17 ingress-addon-legacy-131000 dockerd[1070]: time="2023-09-11T11:15:17.670308704Z" level=info msg="ignoring event" container=1745901c62f6bde6baa2d38c5cb44051d1d1710bf30269140c6f86aa7e68ee37 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 11 11:15:17 ingress-addon-legacy-131000 dockerd[1076]: time="2023-09-11T11:15:17.671402026Z" level=info msg="shim disconnected" id=1745901c62f6bde6baa2d38c5cb44051d1d1710bf30269140c6f86aa7e68ee37 namespace=moby
	Sep 11 11:15:17 ingress-addon-legacy-131000 dockerd[1076]: time="2023-09-11T11:15:17.671459231Z" level=warning msg="cleaning up after shim disconnected" id=1745901c62f6bde6baa2d38c5cb44051d1d1710bf30269140c6f86aa7e68ee37 namespace=moby
	Sep 11 11:15:17 ingress-addon-legacy-131000 dockerd[1076]: time="2023-09-11T11:15:17.671467856Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 11 11:15:18 ingress-addon-legacy-131000 dockerd[1076]: time="2023-09-11T11:15:18.674960815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 11 11:15:18 ingress-addon-legacy-131000 dockerd[1076]: time="2023-09-11T11:15:18.674998688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 11 11:15:18 ingress-addon-legacy-131000 dockerd[1076]: time="2023-09-11T11:15:18.675231552Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 11 11:15:18 ingress-addon-legacy-131000 dockerd[1076]: time="2023-09-11T11:15:18.675247884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 11 11:15:18 ingress-addon-legacy-131000 dockerd[1076]: time="2023-09-11T11:15:18.713580198Z" level=info msg="shim disconnected" id=05215d40293515cad7d93a94aa9160af212af9a71af29f258b43dabf97fbd1b6 namespace=moby
	Sep 11 11:15:18 ingress-addon-legacy-131000 dockerd[1070]: time="2023-09-11T11:15:18.713593780Z" level=info msg="ignoring event" container=05215d40293515cad7d93a94aa9160af212af9a71af29f258b43dabf97fbd1b6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 11 11:15:18 ingress-addon-legacy-131000 dockerd[1076]: time="2023-09-11T11:15:18.713840644Z" level=warning msg="cleaning up after shim disconnected" id=05215d40293515cad7d93a94aa9160af212af9a71af29f258b43dabf97fbd1b6 namespace=moby
	Sep 11 11:15:18 ingress-addon-legacy-131000 dockerd[1076]: time="2023-09-11T11:15:18.713880309Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 11 11:15:24 ingress-addon-legacy-131000 dockerd[1070]: time="2023-09-11T11:15:24.121469909Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=692fbd32f86392d9268e148ecc89d6be6e60fcfe9f86038e25559a40d571421c
	Sep 11 11:15:24 ingress-addon-legacy-131000 dockerd[1070]: time="2023-09-11T11:15:24.128019228Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=692fbd32f86392d9268e148ecc89d6be6e60fcfe9f86038e25559a40d571421c
	Sep 11 11:15:24 ingress-addon-legacy-131000 dockerd[1070]: time="2023-09-11T11:15:24.205390863Z" level=info msg="ignoring event" container=692fbd32f86392d9268e148ecc89d6be6e60fcfe9f86038e25559a40d571421c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 11 11:15:24 ingress-addon-legacy-131000 dockerd[1076]: time="2023-09-11T11:15:24.205858053Z" level=info msg="shim disconnected" id=692fbd32f86392d9268e148ecc89d6be6e60fcfe9f86038e25559a40d571421c namespace=moby
	Sep 11 11:15:24 ingress-addon-legacy-131000 dockerd[1076]: time="2023-09-11T11:15:24.205918634Z" level=warning msg="cleaning up after shim disconnected" id=692fbd32f86392d9268e148ecc89d6be6e60fcfe9f86038e25559a40d571421c namespace=moby
	Sep 11 11:15:24 ingress-addon-legacy-131000 dockerd[1076]: time="2023-09-11T11:15:24.205928508Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 11 11:15:24 ingress-addon-legacy-131000 dockerd[1076]: time="2023-09-11T11:15:24.243372251Z" level=info msg="shim disconnected" id=9c082b45b0b5b28acd2b4cc705c8befc5369f19c7e323dd9e1e3b9a2ff93b0d2 namespace=moby
	Sep 11 11:15:24 ingress-addon-legacy-131000 dockerd[1076]: time="2023-09-11T11:15:24.243423082Z" level=warning msg="cleaning up after shim disconnected" id=9c082b45b0b5b28acd2b4cc705c8befc5369f19c7e323dd9e1e3b9a2ff93b0d2 namespace=moby
	Sep 11 11:15:24 ingress-addon-legacy-131000 dockerd[1076]: time="2023-09-11T11:15:24.243429332Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 11 11:15:24 ingress-addon-legacy-131000 dockerd[1070]: time="2023-09-11T11:15:24.243529119Z" level=info msg="ignoring event" container=9c082b45b0b5b28acd2b4cc705c8befc5369f19c7e323dd9e1e3b9a2ff93b0d2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
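
Two patterns account for nearly all of this journal excerpt. The "shim disconnected / cleaning up after shim disconnected / cleaning up dead shim" triplets are containerd's runc shim shutting down after a container exits, which is routine. More telling are the "Container failed to exit within 2s of signal 15 - using the force" lines: dockerd gave container 692fbd32f8639 (the ingress-nginx controller, per the container-status table below) a 2-second SIGTERM grace period and then sent SIGKILL, the same behaviour as 'docker stop --time=2'; that is most likely the addon teardown at the end of the failing ingress test.
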
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	05215d4029351       a39a074194753                                                                                                      11 seconds ago       Exited              hello-world-app           2                   b98fcd5edfd99
	a7acabbe1be32       nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                                      35 seconds ago       Running             nginx                     0                   94e93537159ca
	692fbd32f8639       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   58 seconds ago       Exited              controller                0                   9c082b45b0b5b
	a666d13aa680e       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              patch                     0                   895ed28dcaba2
	40ba47e240b0e       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              create                    0                   a4e116e0f00fe
	7862dec76d7d0       gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944    About a minute ago   Running             storage-provisioner       0                   979fef9ec0890
	8322a44bb93a5       565297bc6f7d4                                                                                                      About a minute ago   Running             kube-proxy                0                   5dbb180db7366
	46fdbb1e23655       6e17ba78cf3eb                                                                                                      About a minute ago   Running             coredns                   0                   66800fb6b951f
	b95348e1d0d03       68a4fac29a865                                                                                                      About a minute ago   Running             kube-controller-manager   0                   d2b957464db63
	ecb53f892167d       2694cf044d665                                                                                                      About a minute ago   Running             kube-apiserver            0                   ba08dcd3bc986
	c21ec4ed71e7a       ab707b0a0ea33                                                                                                      About a minute ago   Running             etcd                      0                   27b3a05505545
	25f3228b2b8ed       095f37015706d                                                                                                      About a minute ago   Running             kube-scheduler            0                   0c523bbcb3df1
	
	* 
	* ==> coredns [46fdbb1e2365] <==
	* [INFO] 172.17.0.1:3912 - 17473 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000027957s
	[INFO] 172.17.0.1:3912 - 31083 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000030498s
	[INFO] 172.17.0.1:3912 - 608 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000027165s
	[INFO] 172.17.0.1:3912 - 47241 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000035705s
	[INFO] 172.17.0.1:41129 - 28653 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000040747s
	[INFO] 172.17.0.1:41129 - 26708 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000038039s
	[INFO] 172.17.0.1:41129 - 58538 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00001704s
	[INFO] 172.17.0.1:41129 - 13979 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00002079s
	[INFO] 172.17.0.1:41129 - 16160 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000028331s
	[INFO] 172.17.0.1:41129 - 2642 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000014207s
	[INFO] 172.17.0.1:41129 - 47243 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000016999s
	[INFO] 172.17.0.1:37799 - 48135 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000051038s
	[INFO] 172.17.0.1:49258 - 31100 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000075035s
	[INFO] 172.17.0.1:37799 - 3951 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00001479s
	[INFO] 172.17.0.1:37799 - 3035 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000045288s
	[INFO] 172.17.0.1:49258 - 23531 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00005612s
	[INFO] 172.17.0.1:37799 - 33982 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000009166s
	[INFO] 172.17.0.1:49258 - 11167 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000007999s
	[INFO] 172.17.0.1:37799 - 56813 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000008374s
	[INFO] 172.17.0.1:49258 - 64270 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000021248s
	[INFO] 172.17.0.1:37799 - 59752 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000008125s
	[INFO] 172.17.0.1:49258 - 22088 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000006499s
	[INFO] 172.17.0.1:37799 - 57093 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000013458s
	[INFO] 172.17.0.1:49258 - 13211 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000011915s
	[INFO] 172.17.0.1:49258 - 4066 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00002304s
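
The NXDOMAIN-then-NOERROR ladders above are the pod DNS search path doing its job, not resolution failures. Judging by the first suffix tried, the querying pod lives in the ingress-nginx namespace, and its kubelet-generated resolv.conf presumably looks roughly like this (the nameserver address is an assumption, the usual default kube-dns ClusterIP):

    nameserver 10.96.0.10
    search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local
    options ndots:5

Because hello-world-app.default.svc.cluster.local contains only four dots, fewer than ndots, the resolver appends each search suffix in turn (each returning NXDOMAIN) before finally trying the name as written, which answers NOERROR with the service address.
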
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-131000
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-131000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c0ed13cc972769b226a536a2831a80a40376f282
	                    minikube.k8s.io/name=ingress-addon-legacy-131000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_11T04_13_57_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Sep 2023 11:13:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-131000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Sep 2023 11:15:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Sep 2023 11:15:03 +0000   Mon, 11 Sep 2023 11:13:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Sep 2023 11:15:03 +0000   Mon, 11 Sep 2023 11:13:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Sep 2023 11:15:03 +0000   Mon, 11 Sep 2023 11:13:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Sep 2023 11:15:03 +0000   Mon, 11 Sep 2023 11:14:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.6
	  Hostname:    ingress-addon-legacy-131000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	System Info:
	  Machine ID:                 1ab5eada066a4d40a67238ff131e1551
	  System UUID:                1ab5eada066a4d40a67238ff131e1551
	  Boot ID:                    42676b3a-ce73-409b-b439-35b604f71368
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.5
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-z5m7k                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 coredns-66bff467f8-2k78q                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     77s
	  kube-system                 etcd-ingress-addon-legacy-131000                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-apiserver-ingress-addon-legacy-131000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-131000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-proxy-z4kk4                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-scheduler-ingress-addon-legacy-131000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  98s (x4 over 99s)  kubelet     Node ingress-addon-legacy-131000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    98s (x4 over 99s)  kubelet     Node ingress-addon-legacy-131000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     98s (x3 over 99s)  kubelet     Node ingress-addon-legacy-131000 status is now: NodeHasSufficientPID
	  Normal  Starting                 86s                kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  86s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  86s                kubelet     Node ingress-addon-legacy-131000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    86s                kubelet     Node ingress-addon-legacy-131000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     86s                kubelet     Node ingress-addon-legacy-131000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                86s                kubelet     Node ingress-addon-legacy-131000 status is now: NodeReady
	  Normal  Starting                 76s                kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Sep11 11:13] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.646522] EINJ: EINJ table not found.
	[  +0.519104] systemd-fstab-generator[116]: Ignoring "noauto" for root device
	[  +0.043368] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000806] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.188498] systemd-fstab-generator[479]: Ignoring "noauto" for root device
	[  +0.081598] systemd-fstab-generator[490]: Ignoring "noauto" for root device
	[  +0.421450] systemd-fstab-generator[707]: Ignoring "noauto" for root device
	[  +0.177335] systemd-fstab-generator[743]: Ignoring "noauto" for root device
	[  +0.080408] systemd-fstab-generator[754]: Ignoring "noauto" for root device
	[  +0.071954] systemd-fstab-generator[767]: Ignoring "noauto" for root device
	[  +1.145187] kauditd_printk_skb: 17 callbacks suppressed
	[  +3.158600] systemd-fstab-generator[1063]: Ignoring "noauto" for root device
	[  +4.088769] systemd-fstab-generator[1533]: Ignoring "noauto" for root device
	[  +8.420929] kauditd_printk_skb: 68 callbacks suppressed
	[  +0.083130] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +6.074309] systemd-fstab-generator[2645]: Ignoring "noauto" for root device
	[Sep11 11:14] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.931113] kauditd_printk_skb: 15 callbacks suppressed
	[  +0.673313] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[ +39.949621] kauditd_printk_skb: 3 callbacks suppressed
	
	* 
	* ==> etcd [c21ec4ed71e7] <==
	* raft2023/09/11 11:13:52 INFO: ed054832bd1917e1 became follower at term 0
	raft2023/09/11 11:13:52 INFO: newRaft ed054832bd1917e1 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/09/11 11:13:52 INFO: ed054832bd1917e1 became follower at term 1
	raft2023/09/11 11:13:52 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-09-11 11:13:52.062072 W | auth: simple token is not cryptographically signed
	2023-09-11 11:13:52.062875 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-09-11 11:13:52.064576 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-09-11 11:13:52.064947 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/09/11 11:13:52 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-09-11 11:13:52.065060 I | etcdserver/membership: added member ed054832bd1917e1 [https://192.168.105.6:2380] to cluster 45a39c2c59b0edf4
	2023-09-11 11:13:52.065082 I | etcdserver: ed054832bd1917e1 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-09-11 11:13:52.065105 I | embed: listening for peers on 192.168.105.6:2380
	raft2023/09/11 11:13:52 INFO: ed054832bd1917e1 is starting a new election at term 1
	raft2023/09/11 11:13:52 INFO: ed054832bd1917e1 became candidate at term 2
	raft2023/09/11 11:13:52 INFO: ed054832bd1917e1 received MsgVoteResp from ed054832bd1917e1 at term 2
	raft2023/09/11 11:13:52 INFO: ed054832bd1917e1 became leader at term 2
	raft2023/09/11 11:13:52 INFO: raft.node: ed054832bd1917e1 elected leader ed054832bd1917e1 at term 2
	2023-09-11 11:13:52.270583 I | etcdserver: published {Name:ingress-addon-legacy-131000 ClientURLs:[https://192.168.105.6:2379]} to cluster 45a39c2c59b0edf4
	2023-09-11 11:13:52.290435 I | embed: ready to serve client requests
	2023-09-11 11:13:52.291074 I | embed: serving client requests on 192.168.105.6:2379
	2023-09-11 11:13:52.344591 I | etcdserver: setting up the initial cluster version to 3.4
	2023-09-11 11:13:52.346452 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-09-11 11:13:52.350447 I | etcdserver/api: enabled capabilities for version 3.4
	2023-09-11 11:13:52.350473 I | embed: ready to serve client requests
	2023-09-11 11:13:52.350960 I | embed: serving client requests on 127.0.0.1:2379
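
For what it is worth, this is the normal single-node bootstrap: the lone member starts as a follower, fast-forwards 9 of its 10 election ticks so the first election fires immediately, votes for itself, and is leader by term 2 well before the apiserver starts hitting it. The two "serving client requests" lines correspond to the two client listeners, the node IP for the apiserver and 127.0.0.1 for local tooling.
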
	
	* 
	* ==> kernel <==
	*  11:15:29 up 2 min,  0 users,  load average: 0.29, 0.16, 0.06
	Linux ingress-addon-legacy-131000 5.10.57 #1 SMP PREEMPT Thu Aug 24 12:01:08 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [ecb53f892167] <==
	* E0911 11:13:54.487594       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.105.6, ResourceVersion: 0, AdditionalErrorMsg: 
	I0911 11:13:54.555634       1 cache.go:39] Caches are synced for autoregister controller
	I0911 11:13:54.556029       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0911 11:13:54.558218       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0911 11:13:54.569429       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0911 11:13:54.577888       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0911 11:13:55.454891       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0911 11:13:55.454948       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0911 11:13:55.484377       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0911 11:13:55.489683       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0911 11:13:55.489942       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0911 11:13:55.635054       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0911 11:13:55.645357       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0911 11:13:55.741444       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.105.6]
	I0911 11:13:55.741885       1 controller.go:609] quota admission added evaluator for: endpoints
	I0911 11:13:55.744166       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0911 11:13:56.775494       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0911 11:13:57.198753       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0911 11:13:57.356060       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0911 11:14:03.555723       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0911 11:14:12.220707       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0911 11:14:12.611409       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0911 11:14:16.121151       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0911 11:14:51.171985       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E0911 11:15:22.125819       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
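
Most of this section is routine bring-up (the early "Unable to remove old endpoints" error simply means no stale master lease existed on a fresh cluster). The entry relevant to the test is the last one: "Token has been invalidated" is the apiserver rejecting a request whose ServiceAccount token's backing secret has been deleted, which matches the ingress-nginx namespace teardown visible in the controller-manager log below at 11:15:26.
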
	
	* 
	* ==> kube-controller-manager [b95348e1d0d0] <==
	* E0911 11:14:12.244286       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"338d3d00-9ed3-4681-ac96-12ac3acffe1a", ResourceVersion:"215", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63830027637, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x400074bd80), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x400074bee0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x400074bf80), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4000b17040), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000b24000), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000b24020), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4000b24060)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4000f2fdb0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40005b4568), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x400098ddc0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40000b3900)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40005b45e8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0911 11:14:12.279277       1 shared_informer.go:230] Caches are synced for stateful set 
	I0911 11:14:12.393960       1 shared_informer.go:230] Caches are synced for attach detach 
	I0911 11:14:12.513408       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
	I0911 11:14:12.517308       1 shared_informer.go:230] Caches are synced for ReplicaSet 
	I0911 11:14:12.530561       1 shared_informer.go:230] Caches are synced for HPA 
	I0911 11:14:12.609699       1 shared_informer.go:230] Caches are synced for deployment 
	I0911 11:14:12.613444       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"c6f749d6-e1d2-4157-84c7-60cc032d712b", APIVersion:"apps/v1", ResourceVersion:"329", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
	I0911 11:14:12.616820       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"4ec695d9-c445-48e8-b16a-b569aca8d631", APIVersion:"apps/v1", ResourceVersion:"330", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-2k78q
	I0911 11:14:12.627127       1 shared_informer.go:230] Caches are synced for disruption 
	I0911 11:14:12.627137       1 disruption.go:339] Sending events to api server.
	I0911 11:14:12.693045       1 shared_informer.go:230] Caches are synced for resource quota 
	I0911 11:14:12.726929       1 shared_informer.go:230] Caches are synced for resource quota 
	I0911 11:14:12.729152       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0911 11:14:12.785751       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0911 11:14:12.785780       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0911 11:14:16.117836       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"9eee8b46-b88c-43fd-8a03-4610db7c208d", APIVersion:"apps/v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0911 11:14:16.131510       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"84e2f8fb-e69d-4758-929a-cd0f7c30b919", APIVersion:"apps/v1", ResourceVersion:"411", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-jbt6s
	I0911 11:14:16.131527       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"cf00742c-03f4-4589-980b-77a1d94260a6", APIVersion:"batch/v1", ResourceVersion:"414", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-6g286
	I0911 11:14:16.144987       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"dc453838-38ba-4777-a5ed-4b707611152c", APIVersion:"batch/v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-j49hq
	I0911 11:14:19.770838       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"dc453838-38ba-4777-a5ed-4b707611152c", APIVersion:"batch/v1", ResourceVersion:"434", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0911 11:14:19.802494       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"cf00742c-03f4-4589-980b-77a1d94260a6", APIVersion:"batch/v1", ResourceVersion:"423", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0911 11:15:00.444147       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"9b12aef6-94d5-4a33-80e6-c52a6dbe2d7b", APIVersion:"apps/v1", ResourceVersion:"561", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0911 11:15:00.453382       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"5f8bf970-8bb9-49aa-ba17-3cd52198ce04", APIVersion:"apps/v1", ResourceVersion:"562", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-z5m7k
	E0911 11:15:26.875721       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-4j846" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [8322a44bb93a] <==
	* W0911 11:14:13.641729       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0911 11:14:13.645777       1 node.go:136] Successfully retrieved node IP: 192.168.105.6
	I0911 11:14:13.645798       1 server_others.go:186] Using iptables Proxier.
	I0911 11:14:13.646125       1 server.go:583] Version: v1.18.20
	I0911 11:14:13.646580       1 config.go:315] Starting service config controller
	I0911 11:14:13.646608       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0911 11:14:13.647045       1 config.go:133] Starting endpoints config controller
	I0911 11:14:13.647056       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0911 11:14:13.753100       1 shared_informer.go:230] Caches are synced for service config 
	I0911 11:14:13.753112       1 shared_informer.go:230] Caches are synced for endpoints config 
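
The opening warning just means the mode field in the kube-proxy configuration was left empty, so kube-proxy fell back to the iptables proxier, as the next line states; after both informer caches sync, proxying is fully up, and nothing in this section indicates a problem.
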
	
	* 
	* ==> kube-scheduler [25f3228b2b8e] <==
	* W0911 11:13:54.515717       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0911 11:13:54.515747       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0911 11:13:54.515766       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0911 11:13:54.521382       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0911 11:13:54.521391       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0911 11:13:54.522740       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0911 11:13:54.522790       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0911 11:13:54.522861       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0911 11:13:54.523253       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0911 11:13:54.526159       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0911 11:13:54.526238       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0911 11:13:54.526590       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0911 11:13:54.526694       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0911 11:13:54.526763       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0911 11:13:54.526858       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0911 11:13:54.527079       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0911 11:13:54.527165       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0911 11:13:54.527207       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0911 11:13:54.527233       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0911 11:13:54.527331       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0911 11:13:54.527399       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0911 11:13:55.382841       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0911 11:13:55.452106       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0911 11:13:55.484709       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0911 11:13:58.023238       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-11 11:13:30 UTC, ends at Mon 2023-09-11 11:15:29 UTC. --
	Sep 11 11:15:05 ingress-addon-legacy-131000 kubelet[2651]: I0911 11:15:05.337639    2651 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: ac1cffd53506fde8caa4c079fe04a462944ab12953e3f95ee05a5c4fe4cfd822
	Sep 11 11:15:05 ingress-addon-legacy-131000 kubelet[2651]: E0911 11:15:05.339519    2651 pod_workers.go:191] Error syncing pod 71e8d52f-5dec-4e2a-8a47-bd8a20d38746 ("hello-world-app-5f5d8b66bb-z5m7k_default(71e8d52f-5dec-4e2a-8a47-bd8a20d38746)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-z5m7k_default(71e8d52f-5dec-4e2a-8a47-bd8a20d38746)"
	Sep 11 11:15:08 ingress-addon-legacy-131000 kubelet[2651]: I0911 11:15:08.625336    2651 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 28a380cb065c5f515d1cad43f2c5c58d5dcfafefc6ab1c939e587360ecc32002
	Sep 11 11:15:08 ingress-addon-legacy-131000 kubelet[2651]: E0911 11:15:08.626955    2651 pod_workers.go:191] Error syncing pod 6725f0b2-af3b-4e0f-9581-e7b39daa5ba8 ("kube-ingress-dns-minikube_kube-system(6725f0b2-af3b-4e0f-9581-e7b39daa5ba8)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(6725f0b2-af3b-4e0f-9581-e7b39daa5ba8)"
	Sep 11 11:15:15 ingress-addon-legacy-131000 kubelet[2651]: I0911 11:15:15.922477    2651 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-f5pmc" (UniqueName: "kubernetes.io/secret/6725f0b2-af3b-4e0f-9581-e7b39daa5ba8-minikube-ingress-dns-token-f5pmc") pod "6725f0b2-af3b-4e0f-9581-e7b39daa5ba8" (UID: "6725f0b2-af3b-4e0f-9581-e7b39daa5ba8")
	Sep 11 11:15:15 ingress-addon-legacy-131000 kubelet[2651]: I0911 11:15:15.925086    2651 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6725f0b2-af3b-4e0f-9581-e7b39daa5ba8-minikube-ingress-dns-token-f5pmc" (OuterVolumeSpecName: "minikube-ingress-dns-token-f5pmc") pod "6725f0b2-af3b-4e0f-9581-e7b39daa5ba8" (UID: "6725f0b2-af3b-4e0f-9581-e7b39daa5ba8"). InnerVolumeSpecName "minikube-ingress-dns-token-f5pmc". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 11 11:15:16 ingress-addon-legacy-131000 kubelet[2651]: I0911 11:15:16.024935    2651 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-f5pmc" (UniqueName: "kubernetes.io/secret/6725f0b2-af3b-4e0f-9581-e7b39daa5ba8-minikube-ingress-dns-token-f5pmc") on node "ingress-addon-legacy-131000" DevicePath ""
	Sep 11 11:15:18 ingress-addon-legacy-131000 kubelet[2651]: I0911 11:15:18.560531    2651 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 28a380cb065c5f515d1cad43f2c5c58d5dcfafefc6ab1c939e587360ecc32002
	Sep 11 11:15:18 ingress-addon-legacy-131000 kubelet[2651]: I0911 11:15:18.624643    2651 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: ac1cffd53506fde8caa4c079fe04a462944ab12953e3f95ee05a5c4fe4cfd822
	Sep 11 11:15:18 ingress-addon-legacy-131000 kubelet[2651]: W0911 11:15:18.725862    2651 container.go:412] Failed to create summary reader for "/kubepods/besteffort/pod71e8d52f-5dec-4e2a-8a47-bd8a20d38746/05215d40293515cad7d93a94aa9160af212af9a71af29f258b43dabf97fbd1b6": none of the resources are being tracked.
	Sep 11 11:15:19 ingress-addon-legacy-131000 kubelet[2651]: W0911 11:15:19.592745    2651 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-z5m7k through plugin: invalid network status for
	Sep 11 11:15:19 ingress-addon-legacy-131000 kubelet[2651]: I0911 11:15:19.599102    2651 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: ac1cffd53506fde8caa4c079fe04a462944ab12953e3f95ee05a5c4fe4cfd822
	Sep 11 11:15:19 ingress-addon-legacy-131000 kubelet[2651]: I0911 11:15:19.599434    2651 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 05215d40293515cad7d93a94aa9160af212af9a71af29f258b43dabf97fbd1b6
	Sep 11 11:15:19 ingress-addon-legacy-131000 kubelet[2651]: E0911 11:15:19.599679    2651 pod_workers.go:191] Error syncing pod 71e8d52f-5dec-4e2a-8a47-bd8a20d38746 ("hello-world-app-5f5d8b66bb-z5m7k_default(71e8d52f-5dec-4e2a-8a47-bd8a20d38746)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-z5m7k_default(71e8d52f-5dec-4e2a-8a47-bd8a20d38746)"
	Sep 11 11:15:20 ingress-addon-legacy-131000 kubelet[2651]: W0911 11:15:20.617211    2651 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-z5m7k through plugin: invalid network status for
	Sep 11 11:15:22 ingress-addon-legacy-131000 kubelet[2651]: E0911 11:15:22.115222    2651 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-jbt6s.1783d3f239082459", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-jbt6s", UID:"d822ffd2-060d-4bf4-8e6a-bd311e851215", APIVersion:"v1", ResourceVersion:"424", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-131000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc137db9286c8c059, ext:84983834892, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc137db9286c8c059, ext:84983834892, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-jbt6s.1783d3f239082459" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 11 11:15:22 ingress-addon-legacy-131000 kubelet[2651]: E0911 11:15:22.123500    2651 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-jbt6s.1783d3f239082459", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-jbt6s", UID:"d822ffd2-060d-4bf4-8e6a-bd311e851215", APIVersion:"v1", ResourceVersion:"424", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-131000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc137db9286c8c059, ext:84983834892, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc137db92870caa01, ext:84988285620, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-jbt6s.1783d3f239082459" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 11 11:15:24 ingress-addon-legacy-131000 kubelet[2651]: W0911 11:15:24.721180    2651 pod_container_deletor.go:77] Container "9c082b45b0b5b28acd2b4cc705c8befc5369f19c7e323dd9e1e3b9a2ff93b0d2" not found in pod's containers
	Sep 11 11:15:26 ingress-addon-legacy-131000 kubelet[2651]: I0911 11:15:26.244211    2651 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/d822ffd2-060d-4bf4-8e6a-bd311e851215-webhook-cert") pod "d822ffd2-060d-4bf4-8e6a-bd311e851215" (UID: "d822ffd2-060d-4bf4-8e6a-bd311e851215")
	Sep 11 11:15:26 ingress-addon-legacy-131000 kubelet[2651]: I0911 11:15:26.244288    2651 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-tx268" (UniqueName: "kubernetes.io/secret/d822ffd2-060d-4bf4-8e6a-bd311e851215-ingress-nginx-token-tx268") pod "d822ffd2-060d-4bf4-8e6a-bd311e851215" (UID: "d822ffd2-060d-4bf4-8e6a-bd311e851215")
	Sep 11 11:15:26 ingress-addon-legacy-131000 kubelet[2651]: I0911 11:15:26.250955    2651 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d822ffd2-060d-4bf4-8e6a-bd311e851215-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "d822ffd2-060d-4bf4-8e6a-bd311e851215" (UID: "d822ffd2-060d-4bf4-8e6a-bd311e851215"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 11 11:15:26 ingress-addon-legacy-131000 kubelet[2651]: I0911 11:15:26.261136    2651 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d822ffd2-060d-4bf4-8e6a-bd311e851215-ingress-nginx-token-tx268" (OuterVolumeSpecName: "ingress-nginx-token-tx268") pod "d822ffd2-060d-4bf4-8e6a-bd311e851215" (UID: "d822ffd2-060d-4bf4-8e6a-bd311e851215"). InnerVolumeSpecName "ingress-nginx-token-tx268". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 11 11:15:26 ingress-addon-legacy-131000 kubelet[2651]: I0911 11:15:26.346180    2651 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/d822ffd2-060d-4bf4-8e6a-bd311e851215-webhook-cert") on node "ingress-addon-legacy-131000" DevicePath ""
	Sep 11 11:15:26 ingress-addon-legacy-131000 kubelet[2651]: I0911 11:15:26.346273    2651 reconciler.go:319] Volume detached for volume "ingress-nginx-token-tx268" (UniqueName: "kubernetes.io/secret/d822ffd2-060d-4bf4-8e6a-bd311e851215-ingress-nginx-token-tx268") on node "ingress-addon-legacy-131000" DevicePath ""
	Sep 11 11:15:27 ingress-addon-legacy-131000 kubelet[2651]: W0911 11:15:27.648875    2651 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/d822ffd2-060d-4bf4-8e6a-bd311e851215/volumes" does not exist
	
	* 
	* ==> storage-provisioner [7862dec76d7d] <==
	* I0911 11:14:14.560876       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0911 11:14:14.566906       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0911 11:14:14.566974       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0911 11:14:14.569366       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0911 11:14:14.569857       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"28c8f354-3b09-4d7e-9ecb-6eaacd0c4518", APIVersion:"v1", ResourceVersion:"373", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-131000_7f22ca1f-988f-45c7-adf9-989b0fbe3b91 became leader
	I0911 11:14:14.569887       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-131000_7f22ca1f-988f-45c7-adf9-989b0fbe3b91!
	I0911 11:14:14.670773       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-131000_7f22ca1f-988f-45c7-adf9-989b0fbe3b91!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-131000 -n ingress-addon-legacy-131000
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-131000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (57.08s)
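
Triage sketch: the kubelet journal above shows hello-world-app and minikube-ingress-dns looping through CrashLoopBackOff while the ingress-nginx namespace is torn down. One way to inspect the same state by hand, assuming the profile is still up; these are standard kubectl commands, and the label app=hello-world-app is the default applied by `kubectl create deployment` (an assumption, not taken from this report):

	kubectl --context ingress-addon-legacy-131000 get pods -A
	kubectl --context ingress-addon-legacy-131000 -n default describe pod -l app=hello-world-app
	kubectl --context ingress-addon-legacy-131000 -n default logs -l app=hello-world-app --previous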

                                                
                                    
TestMinikubeProfile (18.12s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-265000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-265000 --driver=qemu2 : exit status 90 (17.662006291s)

                                                
                                                
-- stdout --
	* [first-265000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node first-265000 in cluster first-265000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-265000 --driver=qemu2 ": exit status 90
panic.go:522: *** TestMinikubeProfile FAILED at 2023-09-11 04:16:45.216424 -0700 PDT m=+2597.762823251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-266000 -n second-266000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-266000 -n second-266000: exit status 85 (41.273958ms)

                                                
                                                
-- stdout --
	* Profile "second-266000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-266000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-266000" host is not running, skipping log retrieval (state="* Profile \"second-266000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-266000\"")
helpers_test.go:175: Cleaning up "second-266000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-266000
panic.go:522: *** TestMinikubeProfile FAILED at 2023-09-11 04:16:45.496302 -0700 PDT m=+2598.042701709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-265000 -n first-265000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-265000 -n first-265000: exit status 6 (74.216333ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0911 04:16:45.565639    2755 status.go:415] kubeconfig endpoint: extract IP: "first-265000" does not appear in /Users/jenkins/minikube-integration/17225-951/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "first-265000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "first-265000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-265000
--- FAIL: TestMinikubeProfile (18.12s)
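
Triage sketch: exit status 90 (RUNTIME_ENABLE) means `sudo systemctl restart cri-docker.socket` failed inside the guest. Assuming the VM were still reachable (it is deleted during cleanup above), the underlying unit failure could be pulled out with standard systemd commands over minikube ssh:

	out/minikube-darwin-arm64 ssh -p first-265000 -- sudo systemctl status cri-docker.socket
	out/minikube-darwin-arm64 ssh -p first-265000 -- sudo journalctl -xeu cri-docker.socket --no-pager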

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (101.04s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-arm64 -p mount-start-2-256000 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p mount-start-2-256000 ssh -- ls /minikube-host: exit status 1 (1m15.038164417s)

                                                
                                                
** stderr ** 
	ssh: dial tcp 192.168.105.10:22: connect: operation timed out

                                                
                                                
** /stderr **
mount_start_test.go:116: mount failed: "out/minikube-darwin-arm64 -p mount-start-2-256000 ssh -- ls /minikube-host" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-2-256000 -n mount-start-2-256000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-2-256000 -n mount-start-2-256000: exit status 3 (25.998195166s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0911 04:19:02.850436    2816 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.10:22: connect: operation timed out
	E0911 04:19:02.850472    2816 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.10:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "mount-start-2-256000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMountStart/serial/VerifyMountPostDelete (101.04s)
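
Triage sketch: both the mount check and the follow-up status probe die on `dial tcp 192.168.105.10:22: connect: operation timed out`, i.e. the guest never answers on SSH. A quick host-side reachability check, assuming the IP from the error is still the one assigned to the VM (macOS BSD tools; -G is the BSD nc connect timeout):

	ping -c 3 192.168.105.10
	nc -z -G 5 192.168.105.10 22
	pgrep -fl qemu-system-aarch64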

                                                
                                    
TestMultiNode/serial/StopNode (378.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-darwin-arm64 -p multinode-705000 node stop m03: (3.057033917s)
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 status
E0911 04:22:16.262272    1393 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/client.crt: no such file or directory
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-705000 status: exit status 7 (2m30.039127208s)

                                                
                                                
-- stdout --
	multinode-705000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-705000-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	
	multinode-705000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0911 04:22:43.020692    3002 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.11:22: connect: operation timed out
	E0911 04:22:43.020715    3002 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.11:22: connect: operation timed out
	E0911 04:23:58.023742    3002 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.12:22: connect: operation timed out
	E0911 04:23:58.023756    3002 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.12:22: connect: operation timed out

                                                
                                                
** /stderr **
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 status --alsologtostderr
E0911 04:24:32.395248    1393 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/client.crt: no such file or directory
E0911 04:25:00.103001    1393 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/client.crt: no such file or directory
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-705000 status --alsologtostderr: exit status 7 (2m30.040931542s)

                                                
                                                
-- stdout --
	multinode-705000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-705000-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	
	multinode-705000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0911 04:23:58.054759    3018 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:23:58.054895    3018 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:23:58.054901    3018 out.go:309] Setting ErrFile to fd 2...
	I0911 04:23:58.054903    3018 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:23:58.055025    3018 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:23:58.055149    3018 out.go:303] Setting JSON to false
	I0911 04:23:58.055162    3018 mustload.go:65] Loading cluster: multinode-705000
	I0911 04:23:58.055214    3018 notify.go:220] Checking for updates...
	I0911 04:23:58.055374    3018 config.go:182] Loaded profile config "multinode-705000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:23:58.055379    3018 status.go:255] checking status of multinode-705000 ...
	I0911 04:23:58.056055    3018 status.go:330] multinode-705000 host status = "Running" (err=<nil>)
	I0911 04:23:58.056063    3018 host.go:66] Checking if "multinode-705000" exists ...
	I0911 04:23:58.056170    3018 host.go:66] Checking if "multinode-705000" exists ...
	I0911 04:23:58.056280    3018 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0911 04:23:58.056291    3018 sshutil.go:53] new ssh client: &{IP:192.168.105.11 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/multinode-705000/id_rsa Username:docker}
	W0911 04:25:13.058634    3018 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.11:22: connect: operation timed out
	W0911 04:25:13.060324    3018 start.go:275] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.11:22: connect: operation timed out
	E0911 04:25:13.060337    3018 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.11:22: connect: operation timed out
	I0911 04:25:13.060342    3018 status.go:257] multinode-705000 status: &{Name:multinode-705000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0911 04:25:13.060351    3018 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.11:22: connect: operation timed out
	I0911 04:25:13.060355    3018 status.go:255] checking status of multinode-705000-m02 ...
	I0911 04:25:13.061017    3018 status.go:330] multinode-705000-m02 host status = "Running" (err=<nil>)
	I0911 04:25:13.061023    3018 host.go:66] Checking if "multinode-705000-m02" exists ...
	I0911 04:25:13.061117    3018 host.go:66] Checking if "multinode-705000-m02" exists ...
	I0911 04:25:13.061226    3018 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0911 04:25:13.061233    3018 sshutil.go:53] new ssh client: &{IP:192.168.105.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/multinode-705000-m02/id_rsa Username:docker}
	W0911 04:26:28.064229    3018 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.12:22: connect: operation timed out
	W0911 04:26:28.064294    3018 start.go:275] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.12:22: connect: operation timed out
	E0911 04:26:28.064308    3018 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.12:22: connect: operation timed out
	I0911 04:26:28.064312    3018 status.go:257] multinode-705000-m02 status: &{Name:multinode-705000-m02 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0911 04:26:28.064323    3018 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.12:22: connect: operation timed out
	I0911 04:26:28.064327    3018 status.go:255] checking status of multinode-705000-m03 ...
	I0911 04:26:28.064902    3018 status.go:330] multinode-705000-m03 host status = "Stopped" (err=<nil>)
	I0911 04:26:28.064908    3018 status.go:343] host is not running, skipping remaining checks
	I0911 04:26:28.064911    3018 status.go:257] multinode-705000-m03 status: &{Name:multinode-705000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:229: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-705000 status --alsologtostderr": multinode-705000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
multinode-705000-m02
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
multinode-705000-m03
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-705000 -n multinode-705000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-705000 -n multinode-705000: exit status 3 (1m15.040386208s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0911 04:27:43.105254    3033 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.11:22: connect: operation timed out
	E0911 04:27:43.105280    3033 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.11:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-705000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopNode (378.18s)
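
Side note: the repeated cert_rotation.go:168 errors reference the client.crt of the earlier ingress-addon-legacy-131000 profile, suggesting a stale kubeconfig entry was left behind after that profile's deletion. A plausible cleanup, assuming the profile itself is already gone:

	out/minikube-darwin-arm64 delete -p ingress-addon-legacy-131000
	kubectl config delete-context ingress-addon-legacy-131000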

                                                
                                    
TestMultiNode/serial/StartAfterStop (230.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-705000 node start m03 --alsologtostderr: exit status 80 (5.079816667s)

                                                
                                                
-- stdout --
	* Starting worker node multinode-705000-m03 in cluster multinode-705000
	* Restarting existing qemu2 VM for "multinode-705000-m03" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-705000-m03" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0911 04:27:43.135507    3045 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:27:43.135720    3045 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:27:43.135726    3045 out.go:309] Setting ErrFile to fd 2...
	I0911 04:27:43.135728    3045 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:27:43.135846    3045 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:27:43.136086    3045 mustload.go:65] Loading cluster: multinode-705000
	I0911 04:27:43.136287    3045 config.go:182] Loaded profile config "multinode-705000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	W0911 04:27:43.136474    3045 host.go:58] "multinode-705000-m03" host status: Stopped
	I0911 04:27:43.140132    3045 out.go:177] * Starting worker node multinode-705000-m03 in cluster multinode-705000
	I0911 04:27:43.143969    3045 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:27:43.143982    3045 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:27:43.143991    3045 cache.go:57] Caching tarball of preloaded images
	I0911 04:27:43.144046    3045 preload.go:174] Found /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:27:43.144051    3045 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:27:43.144113    3045 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/multinode-705000/config.json ...
	I0911 04:27:43.144418    3045 start.go:365] acquiring machines lock for multinode-705000-m03: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:27:43.144453    3045 start.go:369] acquired machines lock for "multinode-705000-m03" in 22.5µs
	I0911 04:27:43.144462    3045 start.go:96] Skipping create...Using existing machine configuration
	I0911 04:27:43.144465    3045 fix.go:54] fixHost starting: m03
	I0911 04:27:43.144556    3045 fix.go:102] recreateIfNeeded on multinode-705000-m03: state=Stopped err=<nil>
	W0911 04:27:43.144562    3045 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 04:27:43.149119    3045 out.go:177] * Restarting existing qemu2 VM for "multinode-705000-m03" ...
	I0911 04:27:43.153107    3045 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/multinode-705000-m03/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/multinode-705000-m03/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/multinode-705000-m03/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:19:43:d7:34:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/multinode-705000-m03/disk.qcow2
	I0911 04:27:43.155232    3045 main.go:141] libmachine: STDOUT: 
	I0911 04:27:43.155247    3045 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:27:43.155283    3045 fix.go:56] fixHost completed within 10.814667ms
	I0911 04:27:43.155325    3045 start.go:83] releasing machines lock for "multinode-705000-m03", held for 10.868667ms
	W0911 04:27:43.155332    3045 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:27:43.155357    3045 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:27:43.155360    3045 start.go:687] Will try again in 5 seconds ...
	I0911 04:27:48.157405    3045 start.go:365] acquiring machines lock for multinode-705000-m03: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:27:48.157564    3045 start.go:369] acquired machines lock for "multinode-705000-m03" in 136.167µs
	I0911 04:27:48.157602    3045 start.go:96] Skipping create...Using existing machine configuration
	I0911 04:27:48.157606    3045 fix.go:54] fixHost starting: m03
	I0911 04:27:48.157766    3045 fix.go:102] recreateIfNeeded on multinode-705000-m03: state=Stopped err=<nil>
	W0911 04:27:48.157783    3045 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 04:27:48.161839    3045 out.go:177] * Restarting existing qemu2 VM for "multinode-705000-m03" ...
	I0911 04:27:48.165832    3045 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/multinode-705000-m03/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/multinode-705000-m03/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/multinode-705000-m03/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:19:43:d7:34:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/multinode-705000-m03/disk.qcow2
	I0911 04:27:48.167915    3045 main.go:141] libmachine: STDOUT: 
	I0911 04:27:48.167928    3045 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:27:48.167946    3045 fix.go:56] fixHost completed within 10.339916ms
	I0911 04:27:48.168002    3045 start.go:83] releasing machines lock for "multinode-705000-m03", held for 10.433125ms
	W0911 04:27:48.168046    3045 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-705000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-705000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:27:48.171740    3045 out.go:177] 
	W0911 04:27:48.175841    3045 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:27:48.175846    3045 out.go:239] * 
	* 
	W0911 04:27:48.177614    3045 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:27:48.181690    3045 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:256: [stderr from "node start m03" repeated verbatim; identical to the ** stderr ** block above]
multinode_test.go:257: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-705000 node start m03 --alsologtostderr": exit status 80
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 status
E0911 04:29:32.396725    1393 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/client.crt: no such file or directory
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-705000 status: exit status 7 (2m30.038865125s)

                                                
                                                
-- stdout --
	multinode-705000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-705000-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	
	multinode-705000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0911 04:29:03.221723    3049 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.11:22: connect: operation timed out
	E0911 04:29:03.221744    3049 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.11:22: connect: operation timed out
	E0911 04:30:18.224382    3049 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.12:22: connect: operation timed out
	E0911 04:30:18.224396    3049 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.12:22: connect: operation timed out

** /stderr **
multinode_test.go:263: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-705000 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-705000 -n multinode-705000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-705000 -n multinode-705000: exit status 3 (1m15.036017708s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0911 04:31:33.260584    3095 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.11:22: connect: operation timed out
	E0911 04:31:33.260595    3095 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.11:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-705000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StartAfterStop (230.16s)
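
The status failure above reduces to plain TCP reachability: both guest SSH endpoints (192.168.105.11:22 and 192.168.105.12:22) time out, so status reports host: Error and exits 7. The following is a minimal Go probe, not part of the test suite, that reproduces just the dial the status checks perform; the addresses and timeout behavior are taken from the stderr block above.

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Probe the guest SSH ports that the status checks dial above.
    	for _, addr := range []string{"192.168.105.11:22", "192.168.105.12:22"} {
    		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
    		if err != nil {
    			fmt.Printf("%s unreachable: %v\n", addr, err)
    			continue
    		}
    		conn.Close()
    		fmt.Printf("%s reachable\n", addr)
    	}
    }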

TestMultiNode/serial/RestartKeepsNodes (41.51s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-705000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-705000
multinode_test.go:290: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-705000: (36.152000417s)
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-705000 --wait=true -v=8 --alsologtostderr
multinode_test.go:295: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-705000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.223259958s)

-- stdout --
	* [multinode-705000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-705000 in cluster multinode-705000
	* Restarting existing qemu2 VM for "multinode-705000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-705000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0911 04:32:09.508725    3122 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:32:09.508889    3122 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:32:09.508893    3122 out.go:309] Setting ErrFile to fd 2...
	I0911 04:32:09.508896    3122 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:32:09.509048    3122 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:32:09.510369    3122 out.go:303] Setting JSON to false
	I0911 04:32:09.530420    3122 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3703,"bootTime":1694428226,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0911 04:32:09.530483    3122 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:32:09.535268    3122 out.go:177] * [multinode-705000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:32:09.542400    3122 out.go:177]   - MINIKUBE_LOCATION=17225
	I0911 04:32:09.546295    3122 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 04:32:09.542466    3122 notify.go:220] Checking for updates...
	I0911 04:32:09.552358    3122 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:32:09.555287    3122 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:32:09.558364    3122 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	I0911 04:32:09.561384    3122 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:32:09.564615    3122 config.go:182] Loaded profile config "multinode-705000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:32:09.564658    3122 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:32:09.569363    3122 out.go:177] * Using the qemu2 driver based on existing profile
	I0911 04:32:09.576367    3122 start.go:298] selected driver: qemu2
	I0911 04:32:09.576396    3122 start.go:902] validating driver "qemu2" against &{Name:multinode-705000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-705000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.11 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.12 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.105.13 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:32:09.576658    3122 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:32:09.579475    3122 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:32:09.579506    3122 cni.go:84] Creating CNI manager for ""
	I0911 04:32:09.579510    3122 cni.go:136] 3 nodes found, recommending kindnet
	I0911 04:32:09.579513    3122 start_flags.go:321] config:
	{Name:multinode-705000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-705000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.11 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.12 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.105.13 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:32:09.583802    3122 iso.go:125] acquiring lock: {Name:mk940869f3ea2950f9d7f9f25946c8b774cc6054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:32:09.592131    3122 out.go:177] * Starting control plane node multinode-705000 in cluster multinode-705000
	I0911 04:32:09.596366    3122 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:32:09.596385    3122 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:32:09.596403    3122 cache.go:57] Caching tarball of preloaded images
	I0911 04:32:09.596463    3122 preload.go:174] Found /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:32:09.596468    3122 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:32:09.596552    3122 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/multinode-705000/config.json ...
	I0911 04:32:09.596905    3122 start.go:365] acquiring machines lock for multinode-705000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:32:09.596934    3122 start.go:369] acquired machines lock for "multinode-705000" in 23.625µs
	I0911 04:32:09.596943    3122 start.go:96] Skipping create...Using existing machine configuration
	I0911 04:32:09.596947    3122 fix.go:54] fixHost starting: 
	I0911 04:32:09.597062    3122 fix.go:102] recreateIfNeeded on multinode-705000: state=Stopped err=<nil>
	W0911 04:32:09.597070    3122 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 04:32:09.601199    3122 out.go:177] * Restarting existing qemu2 VM for "multinode-705000" ...
	I0911 04:32:09.609369    3122 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/multinode-705000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/multinode-705000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/multinode-705000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:b2:e2:ac:f7:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/multinode-705000/disk.qcow2
	I0911 04:32:09.611187    3122 main.go:141] libmachine: STDOUT: 
	I0911 04:32:09.611211    3122 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:32:09.611239    3122 fix.go:56] fixHost completed within 14.29075ms
	I0911 04:32:09.611243    3122 start.go:83] releasing machines lock for "multinode-705000", held for 14.305625ms
	W0911 04:32:09.611249    3122 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:32:09.611289    3122 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:32:09.611293    3122 start.go:687] Will try again in 5 seconds ...
	I0911 04:32:14.613413    3122 start.go:365] acquiring machines lock for multinode-705000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:32:14.613933    3122 start.go:369] acquired machines lock for "multinode-705000" in 350.167µs
	I0911 04:32:14.614083    3122 start.go:96] Skipping create...Using existing machine configuration
	I0911 04:32:14.614106    3122 fix.go:54] fixHost starting: 
	I0911 04:32:14.614796    3122 fix.go:102] recreateIfNeeded on multinode-705000: state=Stopped err=<nil>
	W0911 04:32:14.614824    3122 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 04:32:14.619303    3122 out.go:177] * Restarting existing qemu2 VM for "multinode-705000" ...
	I0911 04:32:14.626365    3122 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/multinode-705000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/multinode-705000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/multinode-705000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:b2:e2:ac:f7:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/multinode-705000/disk.qcow2
	I0911 04:32:14.635553    3122 main.go:141] libmachine: STDOUT: 
	I0911 04:32:14.635620    3122 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:32:14.635718    3122 fix.go:56] fixHost completed within 21.613083ms
	I0911 04:32:14.635741    3122 start.go:83] releasing machines lock for "multinode-705000", held for 21.782167ms
	W0911 04:32:14.635940    3122 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-705000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-705000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:32:14.644258    3122 out.go:177] 
	W0911 04:32:14.648342    3122 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:32:14.648371    3122 out.go:239] * 
	* 
	W0911 04:32:14.651030    3122 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:32:14.657227    3122 out.go:177] 

** /stderr **
multinode_test.go:297: failed to run minikube start. args "out/minikube-darwin-arm64 start -p multinode-705000 --wait=true -v=8 --alsologtostderr" : exit status 80
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-705000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-705000 -n multinode-705000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-705000 -n multinode-705000: exit status 7 (32.813291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-705000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (41.51s)
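
Every restart in this block fails at the same point: libmachine launches qemu through /opt/socket_vmnet/bin/socket_vmnet_client, and that client gets "Connection refused" on /var/run/socket_vmnet, which on a unix socket means no socket_vmnet daemon is accepting there. A minimal sketch, assuming only the socket path shown in the log, that checks the socket directly:

    package main

    import (
    	"fmt"
    	"net"
    	"os"
    )

    func main() {
    	// "Connection refused" here reproduces the driver failure above:
    	// the socket file may exist, but nothing is listening on it.
    	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "socket_vmnet not reachable:", err)
    		os.Exit(1)
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }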

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 node delete m03
multinode_test.go:394: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-705000 node delete m03: exit status 89 (38.422ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-705000"

-- /stdout --
multinode_test.go:396: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-705000 node delete m03": exit status 89
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 status --alsologtostderr
multinode_test.go:400: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-705000 status --alsologtostderr: exit status 7 (28.862ms)

-- stdout --
	multinode-705000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-705000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
	multinode-705000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0911 04:32:14.835830    3137 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:32:14.835952    3137 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:32:14.835955    3137 out.go:309] Setting ErrFile to fd 2...
	I0911 04:32:14.835958    3137 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:32:14.836080    3137 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:32:14.836185    3137 out.go:303] Setting JSON to false
	I0911 04:32:14.836197    3137 mustload.go:65] Loading cluster: multinode-705000
	I0911 04:32:14.836266    3137 notify.go:220] Checking for updates...
	I0911 04:32:14.836392    3137 config.go:182] Loaded profile config "multinode-705000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:32:14.836397    3137 status.go:255] checking status of multinode-705000 ...
	I0911 04:32:14.836573    3137 status.go:330] multinode-705000 host status = "Stopped" (err=<nil>)
	I0911 04:32:14.836576    3137 status.go:343] host is not running, skipping remaining checks
	I0911 04:32:14.836578    3137 status.go:257] multinode-705000 status: &{Name:multinode-705000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0911 04:32:14.836587    3137 status.go:255] checking status of multinode-705000-m02 ...
	I0911 04:32:14.836682    3137 status.go:330] multinode-705000-m02 host status = "Stopped" (err=<nil>)
	I0911 04:32:14.836684    3137 status.go:343] host is not running, skipping remaining checks
	I0911 04:32:14.836686    3137 status.go:257] multinode-705000-m02 status: &{Name:multinode-705000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0911 04:32:14.836690    3137 status.go:255] checking status of multinode-705000-m03 ...
	I0911 04:32:14.836779    3137 status.go:330] multinode-705000-m03 host status = "Stopped" (err=<nil>)
	I0911 04:32:14.836782    3137 status.go:343] host is not running, skipping remaining checks
	I0911 04:32:14.836783    3137 status.go:257] multinode-705000-m03 status: &{Name:multinode-705000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:402: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-705000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-705000 -n multinode-705000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-705000 -n multinode-705000: exit status 7 (28.173417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-705000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
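
The post-mortem's `status --format={{.Host}}` output ("Stopped") is a Go text/template rendered over the same status value whose fields appear in the stderr block (Host, Kubelet, APIServer, Kubeconfig). The sketch below uses a trimmed, hypothetical reconstruction of that struct, keeping only fields visible in the log:

    package main

    import (
    	"os"
    	"text/template"
    )

    // Status is a hypothetical, trimmed reconstruction of the value printed
    // by status.go above; only the fields visible in the log are kept.
    type Status struct {
    	Name, Host, Kubelet, APIServer, Kubeconfig string
    }

    func main() {
    	st := Status{Name: "multinode-705000", Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"}
    	// --format={{.Host}} selects one field, which is why the
    	// post-mortem command prints just "Stopped".
    	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
    	_ = tmpl.Execute(os.Stdout, st)
    }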

TestMultiNode/serial/StopMultiNode (0.17s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 stop
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-705000 status: exit status 7 (30.123208ms)

-- stdout --
	multinode-705000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-705000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
	multinode-705000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-705000 status --alsologtostderr: exit status 7 (28.898833ms)

-- stdout --
	multinode-705000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-705000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
	multinode-705000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0911 04:32:15.007948    3145 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:32:15.008088    3145 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:32:15.008091    3145 out.go:309] Setting ErrFile to fd 2...
	I0911 04:32:15.008093    3145 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:32:15.008219    3145 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:32:15.008350    3145 out.go:303] Setting JSON to false
	I0911 04:32:15.008361    3145 mustload.go:65] Loading cluster: multinode-705000
	I0911 04:32:15.008423    3145 notify.go:220] Checking for updates...
	I0911 04:32:15.008560    3145 config.go:182] Loaded profile config "multinode-705000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:32:15.008564    3145 status.go:255] checking status of multinode-705000 ...
	I0911 04:32:15.008747    3145 status.go:330] multinode-705000 host status = "Stopped" (err=<nil>)
	I0911 04:32:15.008751    3145 status.go:343] host is not running, skipping remaining checks
	I0911 04:32:15.008754    3145 status.go:257] multinode-705000 status: &{Name:multinode-705000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0911 04:32:15.008763    3145 status.go:255] checking status of multinode-705000-m02 ...
	I0911 04:32:15.008875    3145 status.go:330] multinode-705000-m02 host status = "Stopped" (err=<nil>)
	I0911 04:32:15.008877    3145 status.go:343] host is not running, skipping remaining checks
	I0911 04:32:15.008879    3145 status.go:257] multinode-705000-m02 status: &{Name:multinode-705000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0911 04:32:15.008883    3145 status.go:255] checking status of multinode-705000-m03 ...
	I0911 04:32:15.008972    3145 status.go:330] multinode-705000-m03 host status = "Stopped" (err=<nil>)
	I0911 04:32:15.008975    3145 status.go:343] host is not running, skipping remaining checks
	I0911 04:32:15.008976    3145 status.go:257] multinode-705000-m03 status: &{Name:multinode-705000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:333: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-705000 status --alsologtostderr": multinode-705000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode-705000-m02
type: Worker
host: Stopped
kubelet: Stopped

multinode-705000-m03
type: Worker
host: Stopped
kubelet: Stopped

multinode_test.go:337: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-705000 status --alsologtostderr": multinode-705000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode-705000-m02
type: Worker
host: Stopped
kubelet: Stopped

multinode-705000-m03
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-705000 -n multinode-705000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-705000 -n multinode-705000: exit status 7 (28.69325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-705000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (0.17s)
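
The assertions at multinode_test.go:333 and :337 complain about the number of stopped hosts and kubelets even though every node prints Stopped; presumably the check counts marker substrings in the status output and compares them against an expected node count. A hypothetical reconstruction of that counting step (the real comparison logic is not shown in this log):

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	// Abbreviated stand-in for the status output quoted above.
    	out := "multinode-705000\nhost: Stopped\nkubelet: Stopped\n" +
    		"multinode-705000-m02\nhost: Stopped\nkubelet: Stopped\n" +
    		"multinode-705000-m03\nhost: Stopped\nkubelet: Stopped\n"
    	fmt.Println("stopped hosts:", strings.Count(out, "host: Stopped"))       // 3
    	fmt.Println("stopped kubelets:", strings.Count(out, "kubelet: Stopped")) // 3
    }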

TestMultiNode/serial/RestartMultiNode (5.23s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-705000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:354: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-705000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.16931125s)

-- stdout --
	* [multinode-705000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-705000 in cluster multinode-705000
	* Restarting existing qemu2 VM for "multinode-705000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-705000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0911 04:32:15.065001    3149 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:32:15.065104    3149 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:32:15.065106    3149 out.go:309] Setting ErrFile to fd 2...
	I0911 04:32:15.065109    3149 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:32:15.065215    3149 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:32:15.066145    3149 out.go:303] Setting JSON to false
	I0911 04:32:15.080968    3149 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3709,"bootTime":1694428226,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0911 04:32:15.081035    3149 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:32:15.085899    3149 out.go:177] * [multinode-705000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:32:15.092914    3149 out.go:177]   - MINIKUBE_LOCATION=17225
	I0911 04:32:15.096706    3149 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 04:32:15.092985    3149 notify.go:220] Checking for updates...
	I0911 04:32:15.102852    3149 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:32:15.105901    3149 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:32:15.108909    3149 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	I0911 04:32:15.111859    3149 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:32:15.115331    3149 config.go:182] Loaded profile config "multinode-705000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:32:15.115589    3149 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:32:15.119836    3149 out.go:177] * Using the qemu2 driver based on existing profile
	I0911 04:32:15.126889    3149 start.go:298] selected driver: qemu2
	I0911 04:32:15.126895    3149 start.go:902] validating driver "qemu2" against &{Name:multinode-705000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-705000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.11 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.12 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.105.13 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:32:15.126972    3149 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:32:15.128863    3149 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:32:15.128924    3149 cni.go:84] Creating CNI manager for ""
	I0911 04:32:15.128928    3149 cni.go:136] 3 nodes found, recommending kindnet
	I0911 04:32:15.128940    3149 start_flags.go:321] config:
	{Name:multinode-705000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-705000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.11 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.12 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.105.13 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:32:15.132692    3149 iso.go:125] acquiring lock: {Name:mk940869f3ea2950f9d7f9f25946c8b774cc6054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:32:15.136897    3149 out.go:177] * Starting control plane node multinode-705000 in cluster multinode-705000
	I0911 04:32:15.143807    3149 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:32:15.143828    3149 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:32:15.143838    3149 cache.go:57] Caching tarball of preloaded images
	I0911 04:32:15.143891    3149 preload.go:174] Found /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:32:15.143896    3149 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:32:15.143960    3149 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/multinode-705000/config.json ...
	I0911 04:32:15.144311    3149 start.go:365] acquiring machines lock for multinode-705000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:32:15.144337    3149 start.go:369] acquired machines lock for "multinode-705000" in 19.792µs
	I0911 04:32:15.144346    3149 start.go:96] Skipping create...Using existing machine configuration
	I0911 04:32:15.144351    3149 fix.go:54] fixHost starting: 
	I0911 04:32:15.144467    3149 fix.go:102] recreateIfNeeded on multinode-705000: state=Stopped err=<nil>
	W0911 04:32:15.144474    3149 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 04:32:15.147861    3149 out.go:177] * Restarting existing qemu2 VM for "multinode-705000" ...
	I0911 04:32:15.155823    3149 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/multinode-705000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/multinode-705000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/multinode-705000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:b2:e2:ac:f7:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/multinode-705000/disk.qcow2
	I0911 04:32:15.157822    3149 main.go:141] libmachine: STDOUT: 
	I0911 04:32:15.157840    3149 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:32:15.157872    3149 fix.go:56] fixHost completed within 13.519916ms
	I0911 04:32:15.157931    3149 start.go:83] releasing machines lock for "multinode-705000", held for 13.590542ms
	W0911 04:32:15.157939    3149 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:32:15.157978    3149 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:32:15.157983    3149 start.go:687] Will try again in 5 seconds ...
	I0911 04:32:20.160114    3149 start.go:365] acquiring machines lock for multinode-705000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:32:20.160415    3149 start.go:369] acquired machines lock for "multinode-705000" in 239.583µs
	I0911 04:32:20.160495    3149 start.go:96] Skipping create...Using existing machine configuration
	I0911 04:32:20.160507    3149 fix.go:54] fixHost starting: 
	I0911 04:32:20.160924    3149 fix.go:102] recreateIfNeeded on multinode-705000: state=Stopped err=<nil>
	W0911 04:32:20.160939    3149 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 04:32:20.165232    3149 out.go:177] * Restarting existing qemu2 VM for "multinode-705000" ...
	I0911 04:32:20.172434    3149 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/multinode-705000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/multinode-705000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/multinode-705000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:b2:e2:ac:f7:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/multinode-705000/disk.qcow2
	I0911 04:32:20.178253    3149 main.go:141] libmachine: STDOUT: 
	I0911 04:32:20.178289    3149 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:32:20.178354    3149 fix.go:56] fixHost completed within 17.843ms
	I0911 04:32:20.178370    3149 start.go:83] releasing machines lock for "multinode-705000", held for 17.936875ms
	W0911 04:32:20.178544    3149 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-705000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-705000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:32:20.186151    3149 out.go:177] 
	W0911 04:32:20.190285    3149 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:32:20.190318    3149 out.go:239] * 
	* 
	W0911 04:32:20.191628    3149 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:32:20.201161    3149 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:356: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-705000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-705000 -n multinode-705000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-705000 -n multinode-705000: exit status 7 (58.984667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-705000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.23s)
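
The start flow above is a single retry loop: StartHost fails, minikube waits five seconds ("Will try again in 5 seconds ..."), retries once, and then exits 80 with GUEST_PROVISION. A stripped-down sketch of that control flow, with a stub standing in for the driver start (the real path shells out to socket_vmnet_client and qemu as shown in the log):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // startHost is a hypothetical stub for the driver start that fails
    // twice in the log above.
    func startHost() error {
    	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
    	if err := startHost(); err != nil {
    		fmt.Println("! StartHost failed, but will try again:", err)
    		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
    		if err := startHost(); err != nil {
    			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
    		}
    	}
    }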

TestMultiNode/serial/ValidateNameConflict (10.47s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-705000
multinode_test.go:452: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-705000-m03 --driver=qemu2 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-705000-m03 --driver=qemu2 : exit status 14 (96.979667ms)

-- stdout --
	* [multinode-705000-m03] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-705000-m03' is duplicated with machine name 'multinode-705000-m03' in profile 'multinode-705000'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-705000-m04 --driver=qemu2 
multinode_test.go:460: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-705000-m04 --driver=qemu2 : exit status 80 (10.13291675s)

-- stdout --
	* [multinode-705000-m04] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-705000-m04 in cluster multinode-705000-m04
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-705000-m04" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-705000-m04" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:462: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-705000-m04 --driver=qemu2 " : exit status 80
multinode_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-705000
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-705000: exit status 89 (78.935125ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-705000"

-- /stdout --
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-705000-m04
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-705000 -n multinode-705000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-705000 -n multinode-705000: exit status 7 (29.455833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-705000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (10.47s)
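
Every exit-status-80 failure in this run traces to the same root cause: the qemu2 driver cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so no VM ever boots. A minimal triage sketch for the build host follows, assuming socket_vmnet was installed via Homebrew as in the minikube qemu driver docs (the service name and restart command are assumptions, not taken from this log):

	# Is the socket present, and is the daemon that owns it alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# Restart the daemon; minikube's docs run the Homebrew service as root
	HOMEBREW=$(which brew) && sudo ${HOMEBREW} services restart socket_vmnet

If the daemon is healthy, the client invocation shown in the verbose logs (/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 ...) should connect instead of being refused.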

TestPreload (9.85s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-972000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-972000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.682251334s)

-- stdout --
	* [test-preload-972000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node test-preload-972000 in cluster test-preload-972000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-972000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0911 04:32:30.949514    3197 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:32:30.949639    3197 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:32:30.949641    3197 out.go:309] Setting ErrFile to fd 2...
	I0911 04:32:30.949643    3197 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:32:30.949754    3197 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:32:30.950727    3197 out.go:303] Setting JSON to false
	I0911 04:32:30.965842    3197 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3724,"bootTime":1694428226,"procs":407,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0911 04:32:30.965924    3197 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:32:30.971056    3197 out.go:177] * [test-preload-972000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:32:30.977850    3197 out.go:177]   - MINIKUBE_LOCATION=17225
	I0911 04:32:30.977905    3197 notify.go:220] Checking for updates...
	I0911 04:32:30.982049    3197 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 04:32:30.985051    3197 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:32:30.986395    3197 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:32:30.989053    3197 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	I0911 04:32:30.992057    3197 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:32:30.995213    3197 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:32:30.998953    3197 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:32:31.005992    3197 start.go:298] selected driver: qemu2
	I0911 04:32:31.005998    3197 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:32:31.006004    3197 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:32:31.007866    3197 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:32:31.010990    3197 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:32:31.014157    3197 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:32:31.014186    3197 cni.go:84] Creating CNI manager for ""
	I0911 04:32:31.014194    3197 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 04:32:31.014198    3197 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0911 04:32:31.014210    3197 start_flags.go:321] config:
	{Name:test-preload-972000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-972000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:32:31.018211    3197 iso.go:125] acquiring lock: {Name:mk940869f3ea2950f9d7f9f25946c8b774cc6054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:32:31.025016    3197 out.go:177] * Starting control plane node test-preload-972000 in cluster test-preload-972000
	I0911 04:32:31.029013    3197 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0911 04:32:31.029100    3197 cache.go:107] acquiring lock: {Name:mka16b08b08162019ebcf8baf85ee0a972ec736d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:32:31.029112    3197 cache.go:107] acquiring lock: {Name:mk219304bd07b2f58c2c2fe5b63365e6d6a63b42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:32:31.029119    3197 cache.go:107] acquiring lock: {Name:mk32f29bfacf7eba28e59e7c0bf44da463e54ea8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:32:31.029132    3197 cache.go:107] acquiring lock: {Name:mk5ce18dbd6426bb3efe8e873e515df2cdb76221 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:32:31.029242    3197 cache.go:107] acquiring lock: {Name:mka651f8ddacc8a48ddb950ef9d018c87ae288fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:32:31.029306    3197 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/test-preload-972000/config.json ...
	I0911 04:32:31.029329    3197 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0911 04:32:31.029338    3197 cache.go:107] acquiring lock: {Name:mk342b6771e617d9b18cbde32631b9dbc8e60ce2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:32:31.029363    3197 cache.go:107] acquiring lock: {Name:mk0a17aa32beda3656f09b4192a92058ee0ef2e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:32:31.029361    3197 cache.go:107] acquiring lock: {Name:mk66d885f46e5f7d268708c999cc53de0141dd2f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:32:31.029371    3197 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/test-preload-972000/config.json: {Name:mk052398c5f167349eb297a1a741d7d56e901c44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:32:31.029352    3197 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0911 04:32:31.029418    3197 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0911 04:32:31.029495    3197 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0911 04:32:31.029497    3197 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 04:32:31.029330    3197 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0911 04:32:31.029626    3197 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0911 04:32:31.029647    3197 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0911 04:32:31.029662    3197 start.go:365] acquiring machines lock for test-preload-972000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:32:31.029696    3197 start.go:369] acquired machines lock for "test-preload-972000" in 24.25µs
	I0911 04:32:31.029707    3197 start.go:93] Provisioning new machine with config: &{Name:test-preload-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-972000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:32:31.029748    3197 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:32:31.037829    3197 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0911 04:32:31.042546    3197 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 04:32:31.042617    3197 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0911 04:32:31.042706    3197 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0911 04:32:31.043128    3197 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0911 04:32:31.043232    3197 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0911 04:32:31.044635    3197 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0911 04:32:31.044662    3197 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0911 04:32:31.044759    3197 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0911 04:32:31.053209    3197 start.go:159] libmachine.API.Create for "test-preload-972000" (driver="qemu2")
	I0911 04:32:31.053224    3197 client.go:168] LocalClient.Create starting
	I0911 04:32:31.053299    3197 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:32:31.053326    3197 main.go:141] libmachine: Decoding PEM data...
	I0911 04:32:31.053337    3197 main.go:141] libmachine: Parsing certificate...
	I0911 04:32:31.053376    3197 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:32:31.053393    3197 main.go:141] libmachine: Decoding PEM data...
	I0911 04:32:31.053401    3197 main.go:141] libmachine: Parsing certificate...
	I0911 04:32:31.053715    3197 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:32:31.179147    3197 main.go:141] libmachine: Creating SSH key...
	I0911 04:32:31.232990    3197 main.go:141] libmachine: Creating Disk image...
	I0911 04:32:31.233002    3197 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:32:31.233139    3197 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/test-preload-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/test-preload-972000/disk.qcow2
	I0911 04:32:31.242143    3197 main.go:141] libmachine: STDOUT: 
	I0911 04:32:31.242163    3197 main.go:141] libmachine: STDERR: 
	I0911 04:32:31.242227    3197 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/test-preload-972000/disk.qcow2 +20000M
	I0911 04:32:31.249699    3197 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:32:31.249713    3197 main.go:141] libmachine: STDERR: 
	I0911 04:32:31.249734    3197 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/test-preload-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/test-preload-972000/disk.qcow2
	I0911 04:32:31.249742    3197 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:32:31.249779    3197 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/test-preload-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/test-preload-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/test-preload-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:86:12:0f:b8:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/test-preload-972000/disk.qcow2
	I0911 04:32:31.251223    3197 main.go:141] libmachine: STDOUT: 
	I0911 04:32:31.251235    3197 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:32:31.251256    3197 client.go:171] LocalClient.Create took 198.026708ms
	W0911 04:32:32.068585    3197 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0911 04:32:32.068610    3197 cache.go:162] opening:  /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0911 04:32:32.073632    3197 cache.go:162] opening:  /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0911 04:32:32.114428    3197 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0911 04:32:32.114463    3197 cache.go:162] opening:  /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0911 04:32:32.278846    3197 cache.go:157] /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0911 04:32:32.278861    3197 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.249765584s
	I0911 04:32:32.278870    3197 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0911 04:32:32.390154    3197 cache.go:162] opening:  /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0911 04:32:32.559433    3197 cache.go:162] opening:  /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0911 04:32:32.748855    3197 cache.go:162] opening:  /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0911 04:32:32.966293    3197 cache.go:162] opening:  /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0911 04:32:33.147295    3197 cache.go:162] opening:  /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0911 04:32:33.243154    3197 cache.go:157] /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0911 04:32:33.243242    3197 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 2.213941916s
	I0911 04:32:33.243265    3197 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0911 04:32:33.251464    3197 start.go:128] duration metric: createHost completed in 2.221694791s
	I0911 04:32:33.251492    3197 start.go:83] releasing machines lock for "test-preload-972000", held for 2.221786833s
	W0911 04:32:33.251541    3197 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:32:33.261772    3197 out.go:177] * Deleting "test-preload-972000" in qemu2 ...
	W0911 04:32:33.280095    3197 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:32:33.280121    3197 start.go:687] Will try again in 5 seconds ...
	I0911 04:32:34.139116    3197 cache.go:157] /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0911 04:32:34.139163    3197 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.109892125s
	I0911 04:32:34.139197    3197 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0911 04:32:35.098603    3197 cache.go:157] /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0911 04:32:35.098652    3197 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 4.0694185s
	I0911 04:32:35.098685    3197 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0911 04:32:36.287064    3197 cache.go:157] /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0911 04:32:36.287112    3197 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.257979167s
	I0911 04:32:36.287144    3197 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0911 04:32:36.915024    3197 cache.go:157] /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0911 04:32:36.915084    3197 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.885835291s
	I0911 04:32:36.915111    3197 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0911 04:32:38.280258    3197 start.go:365] acquiring machines lock for test-preload-972000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:32:38.280684    3197 start.go:369] acquired machines lock for "test-preload-972000" in 356.291µs
	I0911 04:32:38.280800    3197 start.go:93] Provisioning new machine with config: &{Name:test-preload-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-972000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:32:38.281090    3197 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:32:38.288583    3197 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0911 04:32:38.336259    3197 start.go:159] libmachine.API.Create for "test-preload-972000" (driver="qemu2")
	I0911 04:32:38.336293    3197 client.go:168] LocalClient.Create starting
	I0911 04:32:38.336391    3197 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:32:38.336447    3197 main.go:141] libmachine: Decoding PEM data...
	I0911 04:32:38.336466    3197 main.go:141] libmachine: Parsing certificate...
	I0911 04:32:38.336528    3197 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:32:38.336565    3197 main.go:141] libmachine: Decoding PEM data...
	I0911 04:32:38.336581    3197 main.go:141] libmachine: Parsing certificate...
	I0911 04:32:38.337081    3197 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:32:38.430556    3197 cache.go:157] /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0911 04:32:38.430580    3197 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 7.401489708s
	I0911 04:32:38.430588    3197 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0911 04:32:38.491585    3197 main.go:141] libmachine: Creating SSH key...
	I0911 04:32:38.543072    3197 main.go:141] libmachine: Creating Disk image...
	I0911 04:32:38.543077    3197 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:32:38.543223    3197 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/test-preload-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/test-preload-972000/disk.qcow2
	I0911 04:32:38.551748    3197 main.go:141] libmachine: STDOUT: 
	I0911 04:32:38.551763    3197 main.go:141] libmachine: STDERR: 
	I0911 04:32:38.551841    3197 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/test-preload-972000/disk.qcow2 +20000M
	I0911 04:32:38.559193    3197 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:32:38.559206    3197 main.go:141] libmachine: STDERR: 
	I0911 04:32:38.559220    3197 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/test-preload-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/test-preload-972000/disk.qcow2
	I0911 04:32:38.559224    3197 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:32:38.559280    3197 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/test-preload-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/test-preload-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/test-preload-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:b6:5f:7d:53:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/test-preload-972000/disk.qcow2
	I0911 04:32:38.560799    3197 main.go:141] libmachine: STDOUT: 
	I0911 04:32:38.560810    3197 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:32:38.560823    3197 client.go:171] LocalClient.Create took 224.526334ms
	I0911 04:32:40.441720    3197 cache.go:157] /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0911 04:32:40.441790    3197 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.412681292s
	I0911 04:32:40.441824    3197 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0911 04:32:40.441863    3197 cache.go:87] Successfully saved all images to host disk.
	I0911 04:32:40.563060    3197 start.go:128] duration metric: createHost completed in 2.281920375s
	I0911 04:32:40.563112    3197 start.go:83] releasing machines lock for "test-preload-972000", held for 2.282406375s
	W0911 04:32:40.563508    3197 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:32:40.574913    3197 out.go:177] 
	W0911 04:32:40.578050    3197 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:32:40.578075    3197 out.go:239] * 
	* 
	W0911 04:32:40.580645    3197 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:32:40.589950    3197 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-972000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:522: *** TestPreload FAILED at 2023-09-11 04:32:40.605496 -0700 PDT m=+3553.152828209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-972000 -n test-preload-972000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-972000 -n test-preload-972000: exit status 7 (65.420792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-972000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-972000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-972000
--- FAIL: TestPreload (9.85s)
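
Note that only host networking failed here: with --preload=false the test falls back to per-image caching, and the verbose log above shows every v1.24.4 image (apiserver, controller-manager, scheduler, proxy, etcd, coredns, pause, storage-provisioner) saved to a tar file, ending with "Successfully saved all images to host disk." A quick check of the cached artifacts on this agent, using the MINIKUBE_HOME path from the log:

	ls -R /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64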

TestScheduledStopUnix (9.85s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-884000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-884000 --memory=2048 --driver=qemu2 : exit status 80 (9.685471083s)

-- stdout --
	* [scheduled-stop-884000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-884000 in cluster scheduled-stop-884000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-884000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-884000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-884000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-884000 in cluster scheduled-stop-884000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-884000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-884000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:522: *** TestScheduledStopUnix FAILED at 2023-09-11 04:32:50.460168 -0700 PDT m=+3563.007509126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-884000 -n scheduled-stop-884000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-884000 -n scheduled-stop-884000: exit status 7 (68.6ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-884000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-884000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-884000
--- FAIL: TestScheduledStopUnix (9.85s)

TestSkaffold (11.92s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe2050319742 version
skaffold_test.go:63: skaffold version: v2.7.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-796000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-796000 --memory=2600 --driver=qemu2 : exit status 80 (9.787561833s)

-- stdout --
	* [skaffold-796000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-796000 in cluster skaffold-796000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-796000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-796000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-796000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-796000 in cluster skaffold-796000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-796000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-796000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:522: *** TestSkaffold FAILED at 2023-09-11 04:33:02.388399 -0700 PDT m=+3574.935752043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-796000 -n skaffold-796000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-796000 -n skaffold-796000: exit status 7 (61.851583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-796000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-796000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-796000
--- FAIL: TestSkaffold (11.92s)

TestRunningBinaryUpgrade (138.23s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:107: v1.6.2 release installation failed: bad response code: 404
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-09-11 04:36:00.67274 -0700 PDT m=+3753.220267209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-619000 -n running-upgrade-619000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-619000 -n running-upgrade-619000: exit status 85 (84.548375ms)

-- stdout --
	* Profile "running-upgrade-619000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p running-upgrade-619000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "running-upgrade-619000" host is not running, skipping log retrieval (state="* Profile \"running-upgrade-619000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p running-upgrade-619000\"")
helpers_test.go:175: Cleaning up "running-upgrade-619000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-619000
--- FAIL: TestRunningBinaryUpgrade (138.23s)
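
Unlike the socket_vmnet failures, this test never reached VM creation: version_upgrade_test.go:107 aborts while downloading the v1.6.2 release binary it upgrades from. A plausible explanation (an inference, not stated in the log) is that v1.6.2 predates darwin/arm64 minikube builds, so no such release asset exists and the download 404s. Assuming the standard minikube release bucket layout, the missing object can be checked directly:

	curl -sI https://storage.googleapis.com/minikube/releases/v1.6.2/minikube-darwin-arm64 | head -n 1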

TestKubernetesUpgrade (15.32s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-055000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:235: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-055000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.800881416s)

-- stdout --
	* [kubernetes-upgrade-055000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubernetes-upgrade-055000 in cluster kubernetes-upgrade-055000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-055000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0911 04:36:01.023737    3682 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:36:01.023842    3682 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:36:01.023845    3682 out.go:309] Setting ErrFile to fd 2...
	I0911 04:36:01.023848    3682 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:36:01.023967    3682 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:36:01.024953    3682 out.go:303] Setting JSON to false
	I0911 04:36:01.039855    3682 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3935,"bootTime":1694428226,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0911 04:36:01.039930    3682 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:36:01.044470    3682 out.go:177] * [kubernetes-upgrade-055000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:36:01.050490    3682 out.go:177]   - MINIKUBE_LOCATION=17225
	I0911 04:36:01.050536    3682 notify.go:220] Checking for updates...
	I0911 04:36:01.053494    3682 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 04:36:01.057498    3682 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:36:01.060505    3682 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:36:01.063448    3682 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	I0911 04:36:01.066470    3682 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:36:01.069667    3682 config.go:182] Loaded profile config "cert-expiration-813000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:36:01.069734    3682 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:36:01.074420    3682 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:36:01.080332    3682 start.go:298] selected driver: qemu2
	I0911 04:36:01.080338    3682 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:36:01.080345    3682 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:36:01.082302    3682 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:36:01.085443    3682 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:36:01.088558    3682 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0911 04:36:01.088583    3682 cni.go:84] Creating CNI manager for ""
	I0911 04:36:01.088590    3682 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0911 04:36:01.088602    3682 start_flags.go:321] config:
	{Name:kubernetes-upgrade-055000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-055000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:36:01.092587    3682 iso.go:125] acquiring lock: {Name:mk940869f3ea2950f9d7f9f25946c8b774cc6054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:36:01.100408    3682 out.go:177] * Starting control plane node kubernetes-upgrade-055000 in cluster kubernetes-upgrade-055000
	I0911 04:36:01.104436    3682 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0911 04:36:01.104461    3682 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0911 04:36:01.104476    3682 cache.go:57] Caching tarball of preloaded images
	I0911 04:36:01.104538    3682 preload.go:174] Found /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:36:01.104544    3682 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0911 04:36:01.104615    3682 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/kubernetes-upgrade-055000/config.json ...
	I0911 04:36:01.104629    3682 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/kubernetes-upgrade-055000/config.json: {Name:mkc6fc17460916bfb36bed822725ddb917d66307 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:36:01.104828    3682 start.go:365] acquiring machines lock for kubernetes-upgrade-055000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:36:01.104858    3682 start.go:369] acquired machines lock for "kubernetes-upgrade-055000" in 23.709µs
	I0911 04:36:01.104870    3682 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-055000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-055000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:36:01.104896    3682 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:36:01.112491    3682 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0911 04:36:01.128144    3682 start.go:159] libmachine.API.Create for "kubernetes-upgrade-055000" (driver="qemu2")
	I0911 04:36:01.128166    3682 client.go:168] LocalClient.Create starting
	I0911 04:36:01.128222    3682 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:36:01.128253    3682 main.go:141] libmachine: Decoding PEM data...
	I0911 04:36:01.128262    3682 main.go:141] libmachine: Parsing certificate...
	I0911 04:36:01.128300    3682 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:36:01.128319    3682 main.go:141] libmachine: Decoding PEM data...
	I0911 04:36:01.128330    3682 main.go:141] libmachine: Parsing certificate...
	I0911 04:36:01.128944    3682 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:36:01.244709    3682 main.go:141] libmachine: Creating SSH key...
	I0911 04:36:01.407048    3682 main.go:141] libmachine: Creating Disk image...
	I0911 04:36:01.407054    3682 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:36:01.407199    3682 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubernetes-upgrade-055000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubernetes-upgrade-055000/disk.qcow2
	I0911 04:36:01.415910    3682 main.go:141] libmachine: STDOUT: 
	I0911 04:36:01.415926    3682 main.go:141] libmachine: STDERR: 
	I0911 04:36:01.415978    3682 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubernetes-upgrade-055000/disk.qcow2 +20000M
	I0911 04:36:01.423080    3682 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:36:01.423093    3682 main.go:141] libmachine: STDERR: 
	I0911 04:36:01.423105    3682 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubernetes-upgrade-055000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubernetes-upgrade-055000/disk.qcow2
	I0911 04:36:01.423112    3682 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:36:01.423151    3682 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubernetes-upgrade-055000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubernetes-upgrade-055000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubernetes-upgrade-055000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:09:59:b2:c5:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubernetes-upgrade-055000/disk.qcow2
	I0911 04:36:01.424595    3682 main.go:141] libmachine: STDOUT: 
	I0911 04:36:01.424609    3682 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:36:01.424628    3682 client.go:171] LocalClient.Create took 296.456209ms
	I0911 04:36:03.426805    3682 start.go:128] duration metric: createHost completed in 2.321894416s
	I0911 04:36:03.426933    3682 start.go:83] releasing machines lock for "kubernetes-upgrade-055000", held for 2.322003708s
	W0911 04:36:03.426998    3682 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:36:03.435315    3682 out.go:177] * Deleting "kubernetes-upgrade-055000" in qemu2 ...
	W0911 04:36:03.456032    3682 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:36:03.456062    3682 start.go:687] Will try again in 5 seconds ...
	I0911 04:36:08.457685    3682 start.go:365] acquiring machines lock for kubernetes-upgrade-055000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:36:08.458155    3682 start.go:369] acquired machines lock for "kubernetes-upgrade-055000" in 369.375µs
	I0911 04:36:08.458273    3682 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-055000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-055000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:36:08.458526    3682 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:36:08.467252    3682 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0911 04:36:08.513246    3682 start.go:159] libmachine.API.Create for "kubernetes-upgrade-055000" (driver="qemu2")
	I0911 04:36:08.513295    3682 client.go:168] LocalClient.Create starting
	I0911 04:36:08.513405    3682 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:36:08.513470    3682 main.go:141] libmachine: Decoding PEM data...
	I0911 04:36:08.513489    3682 main.go:141] libmachine: Parsing certificate...
	I0911 04:36:08.513551    3682 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:36:08.513596    3682 main.go:141] libmachine: Decoding PEM data...
	I0911 04:36:08.513610    3682 main.go:141] libmachine: Parsing certificate...
	I0911 04:36:08.514189    3682 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:36:08.647607    3682 main.go:141] libmachine: Creating SSH key...
	I0911 04:36:08.737652    3682 main.go:141] libmachine: Creating Disk image...
	I0911 04:36:08.737658    3682 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:36:08.737797    3682 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubernetes-upgrade-055000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubernetes-upgrade-055000/disk.qcow2
	I0911 04:36:08.746558    3682 main.go:141] libmachine: STDOUT: 
	I0911 04:36:08.746574    3682 main.go:141] libmachine: STDERR: 
	I0911 04:36:08.746636    3682 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubernetes-upgrade-055000/disk.qcow2 +20000M
	I0911 04:36:08.753825    3682 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:36:08.753839    3682 main.go:141] libmachine: STDERR: 
	I0911 04:36:08.753851    3682 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubernetes-upgrade-055000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubernetes-upgrade-055000/disk.qcow2
	I0911 04:36:08.753858    3682 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:36:08.753897    3682 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubernetes-upgrade-055000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubernetes-upgrade-055000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubernetes-upgrade-055000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:c1:3a:c4:18:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubernetes-upgrade-055000/disk.qcow2
	I0911 04:36:08.755446    3682 main.go:141] libmachine: STDOUT: 
	I0911 04:36:08.755466    3682 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:36:08.755480    3682 client.go:171] LocalClient.Create took 242.177208ms
	I0911 04:36:10.757638    3682 start.go:128] duration metric: createHost completed in 2.299086667s
	I0911 04:36:10.757704    3682 start.go:83] releasing machines lock for "kubernetes-upgrade-055000", held for 2.299527083s
	W0911 04:36:10.758131    3682 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-055000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-055000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:36:10.768658    3682 out.go:177] 
	W0911 04:36:10.772648    3682 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:36:10.772675    3682 out.go:239] * 
	* 
	W0911 04:36:10.775379    3682 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:36:10.784571    3682 out.go:177] 

** /stderr **
version_upgrade_test.go:237: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-055000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-055000
version_upgrade_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-055000 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-055000 status --format={{.Host}}: exit status 7 (36.196583ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-055000 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-055000 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.183427166s)

-- stdout --
	* [kubernetes-upgrade-055000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node kubernetes-upgrade-055000 in cluster kubernetes-upgrade-055000
	* Restarting existing qemu2 VM for "kubernetes-upgrade-055000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-055000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0911 04:36:10.969502    3703 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:36:10.969615    3703 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:36:10.969617    3703 out.go:309] Setting ErrFile to fd 2...
	I0911 04:36:10.969620    3703 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:36:10.969731    3703 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:36:10.970670    3703 out.go:303] Setting JSON to false
	I0911 04:36:10.985647    3703 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3944,"bootTime":1694428226,"procs":409,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0911 04:36:10.985721    3703 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:36:10.990650    3703 out.go:177] * [kubernetes-upgrade-055000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:36:10.997618    3703 out.go:177]   - MINIKUBE_LOCATION=17225
	I0911 04:36:10.997666    3703 notify.go:220] Checking for updates...
	I0911 04:36:11.001686    3703 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 04:36:11.008614    3703 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:36:11.012686    3703 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:36:11.015697    3703 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	I0911 04:36:11.018631    3703 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:36:11.021927    3703 config.go:182] Loaded profile config "kubernetes-upgrade-055000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0911 04:36:11.022169    3703 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:36:11.026648    3703 out.go:177] * Using the qemu2 driver based on existing profile
	I0911 04:36:11.033609    3703 start.go:298] selected driver: qemu2
	I0911 04:36:11.033614    3703 start.go:902] validating driver "qemu2" against &{Name:kubernetes-upgrade-055000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubern
etesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-055000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:36:11.033681    3703 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:36:11.035788    3703 cni.go:84] Creating CNI manager for ""
	I0911 04:36:11.035804    3703 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 04:36:11.035809    3703 start_flags.go:321] config:
	{Name:kubernetes-upgrade-055000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kubernetes-upgrade-055000 Nam
espace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFi
rmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:36:11.041023    3703 iso.go:125] acquiring lock: {Name:mk940869f3ea2950f9d7f9f25946c8b774cc6054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:36:11.048593    3703 out.go:177] * Starting control plane node kubernetes-upgrade-055000 in cluster kubernetes-upgrade-055000
	I0911 04:36:11.052616    3703 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:36:11.052633    3703 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:36:11.052653    3703 cache.go:57] Caching tarball of preloaded images
	I0911 04:36:11.052711    3703 preload.go:174] Found /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:36:11.052718    3703 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:36:11.052784    3703 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/kubernetes-upgrade-055000/config.json ...
	I0911 04:36:11.053140    3703 start.go:365] acquiring machines lock for kubernetes-upgrade-055000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:36:11.053165    3703 start.go:369] acquired machines lock for "kubernetes-upgrade-055000" in 19.958µs
	I0911 04:36:11.053175    3703 start.go:96] Skipping create...Using existing machine configuration
	I0911 04:36:11.053180    3703 fix.go:54] fixHost starting: 
	I0911 04:36:11.053301    3703 fix.go:102] recreateIfNeeded on kubernetes-upgrade-055000: state=Stopped err=<nil>
	W0911 04:36:11.053309    3703 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 04:36:11.061669    3703 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-055000" ...
	I0911 04:36:11.064694    3703 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubernetes-upgrade-055000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubernetes-upgrade-055000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubernetes-upgrade-055000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:c1:3a:c4:18:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubernetes-upgrade-055000/disk.qcow2
	I0911 04:36:11.066676    3703 main.go:141] libmachine: STDOUT: 
	I0911 04:36:11.066696    3703 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:36:11.066727    3703 fix.go:56] fixHost completed within 13.545084ms
	I0911 04:36:11.066733    3703 start.go:83] releasing machines lock for "kubernetes-upgrade-055000", held for 13.563209ms
	W0911 04:36:11.066740    3703 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:36:11.066781    3703 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:36:11.066786    3703 start.go:687] Will try again in 5 seconds ...
	I0911 04:36:16.068969    3703 start.go:365] acquiring machines lock for kubernetes-upgrade-055000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:36:16.069410    3703 start.go:369] acquired machines lock for "kubernetes-upgrade-055000" in 360.042µs
	I0911 04:36:16.069560    3703 start.go:96] Skipping create...Using existing machine configuration
	I0911 04:36:16.069584    3703 fix.go:54] fixHost starting: 
	I0911 04:36:16.070387    3703 fix.go:102] recreateIfNeeded on kubernetes-upgrade-055000: state=Stopped err=<nil>
	W0911 04:36:16.070413    3703 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 04:36:16.077065    3703 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-055000" ...
	I0911 04:36:16.081139    3703 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubernetes-upgrade-055000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubernetes-upgrade-055000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubernetes-upgrade-055000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:c1:3a:c4:18:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubernetes-upgrade-055000/disk.qcow2
	I0911 04:36:16.090304    3703 main.go:141] libmachine: STDOUT: 
	I0911 04:36:16.090375    3703 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:36:16.090471    3703 fix.go:56] fixHost completed within 20.891625ms
	I0911 04:36:16.090498    3703 start.go:83] releasing machines lock for "kubernetes-upgrade-055000", held for 21.062292ms
	W0911 04:36:16.090697    3703 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-055000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-055000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:36:16.098988    3703 out.go:177] 
	W0911 04:36:16.103102    3703 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:36:16.103139    3703 out.go:239] * 
	* 
	W0911 04:36:16.105320    3703 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:36:16.113874    3703 out.go:177] 

** /stderr **
version_upgrade_test.go:258: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-055000 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-055000 version --output=json
version_upgrade_test.go:261: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-055000 version --output=json: exit status 1 (63.887333ms)

** stderr ** 
	error: context "kubernetes-upgrade-055000" does not exist

** /stderr **
version_upgrade_test.go:263: error running kubectl: exit status 1
panic.go:522: *** TestKubernetesUpgrade FAILED at 2023-09-11 04:36:16.192006 -0700 PDT m=+3768.739548626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-055000 -n kubernetes-upgrade-055000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-055000 -n kubernetes-upgrade-055000: exit status 7 (32.608333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-055000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-055000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-055000
--- FAIL: TestKubernetesUpgrade (15.32s)
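
Every qemu2 start in this block (and in the failures below) dies on the same Failed to connect to "/var/run/socket_vmnet": Connection refused, so the outcome says more about the agent than about the upgrade path. A quick probe of the socket, sketched below, separates "socket_vmnet daemon is down on this agent" from a genuine minikube regression; the socket path is taken from the SocketVMnetPath value in the config dump above:

	// probe_vmnet.go: dial the unix socket the qemu2 driver depends on.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// "connection refused" (or "no such file or directory") here mirrors
			// the driver failure and points at socket_vmnet, not at minikube.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}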

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.46s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.31.2 on darwin (arm64)
- MINIKUBE_LOCATION=17225
- KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2381023269/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.46s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.83s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.31.2 on darwin (arm64)
- MINIKUBE_LOCATION=17225
- KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3414706814/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.83s)
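
Both upgrade subtests fail identically with DRV_UNSUPPORTED_OS, because the hyperkit driver is Intel-only and this agent is darwin/arm64. A guard of roughly this shape would skip them on Apple Silicon; this is an illustrative sketch, not the suite's actual skip logic (package and test name are hypothetical):

	// Illustrative only; the real suite's guard may differ.
	package driver_test

	import (
		"runtime"
		"testing"
	)

	func TestHyperkitDriverSkipUpgradeExample(t *testing.T) {
		if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
			t.Skipf("hyperkit requires darwin/amd64; this agent is %s/%s",
				runtime.GOOS, runtime.GOARCH)
		}
		// ...exercise the driver upgrade path here...
	}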

TestStoppedBinaryUpgrade/Setup (153.15s)

=== RUN   TestStoppedBinaryUpgrade/Setup
version_upgrade_test.go:168: v1.6.2 release installation failed: bad response code: 404
--- FAIL: TestStoppedBinaryUpgrade/Setup (153.15s)
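
This is the same v1.6.2 download failure as TestRunningBinaryUpgrade above (version_upgrade_test.go:168 rather than :107): setup never gets past fetching the old release binary, so the release-asset probe sketched under that test applies here as well.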

TestPause/serial/Start (10.04s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-021000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-021000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.971814s)

-- stdout --
	* [pause-021000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node pause-021000 in cluster pause-021000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-021000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-021000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-021000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-021000 -n pause-021000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-021000 -n pause-021000: exit status 7 (68.482625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-021000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.04s)

TestNoKubernetes/serial/StartWithK8s (9.81s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-980000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-980000 --driver=qemu2 : exit status 80 (9.739696458s)

-- stdout --
	* [NoKubernetes-980000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node NoKubernetes-980000 in cluster NoKubernetes-980000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-980000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-980000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-980000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-980000 -n NoKubernetes-980000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-980000 -n NoKubernetes-980000: exit status 7 (70.096875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-980000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.81s)

TestNoKubernetes/serial/StartWithStopK8s (5.31s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-980000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-980000 --no-kubernetes --driver=qemu2 : exit status 80 (5.236245917s)

-- stdout --
	* [NoKubernetes-980000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-980000
	* Restarting existing qemu2 VM for "NoKubernetes-980000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-980000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-980000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-980000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-980000 -n NoKubernetes-980000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-980000 -n NoKubernetes-980000: exit status 7 (68.992167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-980000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.31s)
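
The post-mortem helper does not treat every non-zero status as fatal: as the lines above show, minikube status exiting 7 with a Stopped host is logged as "may be ok" and only skips log retrieval. Reproducing that check by hand, with the profile name taken from the logs:

	out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-980000
	echo $?    # prints 7 here, matching the "status error: exit status 7 (may be ok)" line above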

TestNoKubernetes/serial/Start (5.32s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-980000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-980000 --no-kubernetes --driver=qemu2 : exit status 80 (5.247984042s)

-- stdout --
	* [NoKubernetes-980000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-980000
	* Restarting existing qemu2 VM for "NoKubernetes-980000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-980000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-980000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-980000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-980000 -n NoKubernetes-980000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-980000 -n NoKubernetes-980000: exit status 7 (68.107458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-980000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.32s)
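
Each failure suggests deleting the profile. That clears the stale qemu2 VM, but it does not restart socket_vmnet, so the next start attempt hits the same refused connection. The suggested recovery, verbatim from the error text above:

	out/minikube-darwin-arm64 delete -p NoKubernetes-980000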

TestNoKubernetes/serial/StartNoArgs (5.3s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-980000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-980000 --driver=qemu2 : exit status 80 (5.228442084s)

-- stdout --
	* [NoKubernetes-980000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-980000
	* Restarting existing qemu2 VM for "NoKubernetes-980000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-980000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-980000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-980000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-980000 -n NoKubernetes-980000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-980000 -n NoKubernetes-980000: exit status 7 (69.790709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-980000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.30s)

TestNetworkPlugins/group/auto/Start (9.64s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.636252833s)

-- stdout --
	* [auto-838000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node auto-838000 in cluster auto-838000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-838000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0911 04:37:19.386633    3832 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:37:19.386740    3832 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:37:19.386743    3832 out.go:309] Setting ErrFile to fd 2...
	I0911 04:37:19.386746    3832 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:37:19.386854    3832 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:37:19.387868    3832 out.go:303] Setting JSON to false
	I0911 04:37:19.403034    3832 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4013,"bootTime":1694428226,"procs":409,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0911 04:37:19.403101    3832 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:37:19.407190    3832 out.go:177] * [auto-838000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:37:19.414093    3832 out.go:177]   - MINIKUBE_LOCATION=17225
	I0911 04:37:19.418098    3832 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 04:37:19.414172    3832 notify.go:220] Checking for updates...
	I0911 04:37:19.424112    3832 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:37:19.427135    3832 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:37:19.430102    3832 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	I0911 04:37:19.433024    3832 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:37:19.436252    3832 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:37:19.440151    3832 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:37:19.447071    3832 start.go:298] selected driver: qemu2
	I0911 04:37:19.447080    3832 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:37:19.447090    3832 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:37:19.449192    3832 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:37:19.452121    3832 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:37:19.455121    3832 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:37:19.455144    3832 cni.go:84] Creating CNI manager for ""
	I0911 04:37:19.455151    3832 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 04:37:19.455155    3832 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0911 04:37:19.455161    3832 start_flags.go:321] config:
	{Name:auto-838000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:auto-838000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:37:19.459515    3832 iso.go:125] acquiring lock: {Name:mk940869f3ea2950f9d7f9f25946c8b774cc6054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:37:19.467153    3832 out.go:177] * Starting control plane node auto-838000 in cluster auto-838000
	I0911 04:37:19.471117    3832 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:37:19.471142    3832 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:37:19.471159    3832 cache.go:57] Caching tarball of preloaded images
	I0911 04:37:19.471234    3832 preload.go:174] Found /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:37:19.471240    3832 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:37:19.471430    3832 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/auto-838000/config.json ...
	I0911 04:37:19.471442    3832 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/auto-838000/config.json: {Name:mk73889e39508c54c7d792075f14abd0c950490a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:37:19.471677    3832 start.go:365] acquiring machines lock for auto-838000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:37:19.471710    3832 start.go:369] acquired machines lock for "auto-838000" in 26.417µs
	I0911 04:37:19.471722    3832 start.go:93] Provisioning new machine with config: &{Name:auto-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:auto-838000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:37:19.471773    3832 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:37:19.480083    3832 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0911 04:37:19.495782    3832 start.go:159] libmachine.API.Create for "auto-838000" (driver="qemu2")
	I0911 04:37:19.495807    3832 client.go:168] LocalClient.Create starting
	I0911 04:37:19.495871    3832 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:37:19.495900    3832 main.go:141] libmachine: Decoding PEM data...
	I0911 04:37:19.495910    3832 main.go:141] libmachine: Parsing certificate...
	I0911 04:37:19.495954    3832 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:37:19.495973    3832 main.go:141] libmachine: Decoding PEM data...
	I0911 04:37:19.495986    3832 main.go:141] libmachine: Parsing certificate...
	I0911 04:37:19.496321    3832 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:37:19.612413    3832 main.go:141] libmachine: Creating SSH key...
	I0911 04:37:19.640840    3832 main.go:141] libmachine: Creating Disk image...
	I0911 04:37:19.640846    3832 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:37:19.640975    3832 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/auto-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/auto-838000/disk.qcow2
	I0911 04:37:19.649362    3832 main.go:141] libmachine: STDOUT: 
	I0911 04:37:19.649376    3832 main.go:141] libmachine: STDERR: 
	I0911 04:37:19.649424    3832 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/auto-838000/disk.qcow2 +20000M
	I0911 04:37:19.656522    3832 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:37:19.656535    3832 main.go:141] libmachine: STDERR: 
	I0911 04:37:19.656550    3832 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/auto-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/auto-838000/disk.qcow2
	I0911 04:37:19.656556    3832 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:37:19.656590    3832 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/auto-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/auto-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/auto-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:0e:0b:22:d0:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/auto-838000/disk.qcow2
	I0911 04:37:19.658062    3832 main.go:141] libmachine: STDOUT: 
	I0911 04:37:19.658073    3832 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:37:19.658092    3832 client.go:171] LocalClient.Create took 162.279667ms
	I0911 04:37:21.660249    3832 start.go:128] duration metric: createHost completed in 2.188462417s
	I0911 04:37:21.660317    3832 start.go:83] releasing machines lock for "auto-838000", held for 2.188596667s
	W0911 04:37:21.660376    3832 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:37:21.667824    3832 out.go:177] * Deleting "auto-838000" in qemu2 ...
	W0911 04:37:21.687767    3832 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:37:21.687791    3832 start.go:687] Will try again in 5 seconds ...
	I0911 04:37:26.690018    3832 start.go:365] acquiring machines lock for auto-838000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:37:26.690796    3832 start.go:369] acquired machines lock for "auto-838000" in 658.875µs
	I0911 04:37:26.690922    3832 start.go:93] Provisioning new machine with config: &{Name:auto-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.1 ClusterName:auto-838000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:37:26.691191    3832 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:37:26.696139    3832 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0911 04:37:26.743124    3832 start.go:159] libmachine.API.Create for "auto-838000" (driver="qemu2")
	I0911 04:37:26.743178    3832 client.go:168] LocalClient.Create starting
	I0911 04:37:26.743290    3832 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:37:26.743343    3832 main.go:141] libmachine: Decoding PEM data...
	I0911 04:37:26.743367    3832 main.go:141] libmachine: Parsing certificate...
	I0911 04:37:26.743434    3832 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:37:26.743468    3832 main.go:141] libmachine: Decoding PEM data...
	I0911 04:37:26.743486    3832 main.go:141] libmachine: Parsing certificate...
	I0911 04:37:26.744367    3832 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:37:26.873278    3832 main.go:141] libmachine: Creating SSH key...
	I0911 04:37:26.934392    3832 main.go:141] libmachine: Creating Disk image...
	I0911 04:37:26.934397    3832 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:37:26.934535    3832 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/auto-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/auto-838000/disk.qcow2
	I0911 04:37:26.943026    3832 main.go:141] libmachine: STDOUT: 
	I0911 04:37:26.943040    3832 main.go:141] libmachine: STDERR: 
	I0911 04:37:26.943090    3832 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/auto-838000/disk.qcow2 +20000M
	I0911 04:37:26.950238    3832 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:37:26.950251    3832 main.go:141] libmachine: STDERR: 
	I0911 04:37:26.950263    3832 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/auto-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/auto-838000/disk.qcow2
	I0911 04:37:26.950276    3832 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:37:26.950303    3832 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/auto-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/auto-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/auto-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:44:c8:7e:0d:f4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/auto-838000/disk.qcow2
	I0911 04:37:26.951845    3832 main.go:141] libmachine: STDOUT: 
	I0911 04:37:26.951861    3832 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:37:26.951873    3832 client.go:171] LocalClient.Create took 208.686625ms
	I0911 04:37:28.954027    3832 start.go:128] duration metric: createHost completed in 2.262808333s
	I0911 04:37:28.954089    3832 start.go:83] releasing machines lock for "auto-838000", held for 2.263271792s
	W0911 04:37:28.954455    3832 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:37:28.965043    3832 out.go:177] 
	W0911 04:37:28.969200    3832 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:37:28.969247    3832 out.go:239] * 
	* 
	W0911 04:37:28.971729    3832 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:37:28.982107    3832 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.64s)
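
The trace above pins down where the start sequence breaks: qemu-img convert and qemu-img resize both succeed, and the failure only appears when libmachine launches qemu-system-aarch64 through socket_vmnet_client. The socket connection can be probed in isolation by substituting a trivial command for the qemu invocation -- a sketch that assumes socket_vmnet_client connects to the socket before exec'ing whatever command it is given, as the fd=3 netdev argument above implies:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# a "Failed to connect ... Connection refused" here reproduces the failure
	# without creating SSH keys or disk images first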

TestNetworkPlugins/group/kindnet/Start (9.77s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.768593s)

-- stdout --
	* [kindnet-838000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kindnet-838000 in cluster kindnet-838000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-838000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0911 04:37:31.066101    3942 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:37:31.066227    3942 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:37:31.066230    3942 out.go:309] Setting ErrFile to fd 2...
	I0911 04:37:31.066233    3942 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:37:31.066348    3942 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:37:31.067403    3942 out.go:303] Setting JSON to false
	I0911 04:37:31.082337    3942 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4025,"bootTime":1694428226,"procs":409,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0911 04:37:31.082408    3942 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:37:31.087881    3942 out.go:177] * [kindnet-838000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:37:31.094787    3942 out.go:177]   - MINIKUBE_LOCATION=17225
	I0911 04:37:31.091824    3942 notify.go:220] Checking for updates...
	I0911 04:37:31.101855    3942 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 04:37:31.105826    3942 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:37:31.108804    3942 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:37:31.111854    3942 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	I0911 04:37:31.114783    3942 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:37:31.117944    3942 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:37:31.121824    3942 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:37:31.128840    3942 start.go:298] selected driver: qemu2
	I0911 04:37:31.128846    3942 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:37:31.128853    3942 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:37:31.130763    3942 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:37:31.133799    3942 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:37:31.136875    3942 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:37:31.136896    3942 cni.go:84] Creating CNI manager for "kindnet"
	I0911 04:37:31.136900    3942 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0911 04:37:31.136905    3942 start_flags.go:321] config:
	{Name:kindnet-838000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kindnet-838000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:37:31.140969    3942 iso.go:125] acquiring lock: {Name:mk940869f3ea2950f9d7f9f25946c8b774cc6054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:37:31.147900    3942 out.go:177] * Starting control plane node kindnet-838000 in cluster kindnet-838000
	I0911 04:37:31.151778    3942 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:37:31.151797    3942 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:37:31.151811    3942 cache.go:57] Caching tarball of preloaded images
	I0911 04:37:31.151873    3942 preload.go:174] Found /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:37:31.151879    3942 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:37:31.152075    3942 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/kindnet-838000/config.json ...
	I0911 04:37:31.152088    3942 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/kindnet-838000/config.json: {Name:mk20266dee9e9637c7d73c22fbd1462dd82382ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:37:31.152286    3942 start.go:365] acquiring machines lock for kindnet-838000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:37:31.152317    3942 start.go:369] acquired machines lock for "kindnet-838000" in 24.459µs
	I0911 04:37:31.152328    3942 start.go:93] Provisioning new machine with config: &{Name:kindnet-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kindnet-838000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:37:31.152359    3942 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:37:31.160834    3942 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0911 04:37:31.176262    3942 start.go:159] libmachine.API.Create for "kindnet-838000" (driver="qemu2")
	I0911 04:37:31.176288    3942 client.go:168] LocalClient.Create starting
	I0911 04:37:31.176344    3942 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:37:31.176368    3942 main.go:141] libmachine: Decoding PEM data...
	I0911 04:37:31.176380    3942 main.go:141] libmachine: Parsing certificate...
	I0911 04:37:31.176424    3942 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:37:31.176442    3942 main.go:141] libmachine: Decoding PEM data...
	I0911 04:37:31.176451    3942 main.go:141] libmachine: Parsing certificate...
	I0911 04:37:31.176797    3942 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:37:31.292488    3942 main.go:141] libmachine: Creating SSH key...
	I0911 04:37:31.426616    3942 main.go:141] libmachine: Creating Disk image...
	I0911 04:37:31.426625    3942 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:37:31.426799    3942 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kindnet-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kindnet-838000/disk.qcow2
	I0911 04:37:31.435448    3942 main.go:141] libmachine: STDOUT: 
	I0911 04:37:31.435463    3942 main.go:141] libmachine: STDERR: 
	I0911 04:37:31.435514    3942 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kindnet-838000/disk.qcow2 +20000M
	I0911 04:37:31.442617    3942 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:37:31.442630    3942 main.go:141] libmachine: STDERR: 
	I0911 04:37:31.442647    3942 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kindnet-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kindnet-838000/disk.qcow2
	I0911 04:37:31.442656    3942 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:37:31.442691    3942 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kindnet-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/kindnet-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kindnet-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:8e:5f:78:c5:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kindnet-838000/disk.qcow2
	I0911 04:37:31.444194    3942 main.go:141] libmachine: STDOUT: 
	I0911 04:37:31.444206    3942 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:37:31.444222    3942 client.go:171] LocalClient.Create took 267.929583ms
	I0911 04:37:33.446371    3942 start.go:128] duration metric: createHost completed in 2.293998375s
	I0911 04:37:33.446477    3942 start.go:83] releasing machines lock for "kindnet-838000", held for 2.29411275s
	W0911 04:37:33.446579    3942 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:37:33.453871    3942 out.go:177] * Deleting "kindnet-838000" in qemu2 ...
	W0911 04:37:33.474038    3942 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:37:33.474064    3942 start.go:687] Will try again in 5 seconds ...
	I0911 04:37:38.476402    3942 start.go:365] acquiring machines lock for kindnet-838000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:37:38.476869    3942 start.go:369] acquired machines lock for "kindnet-838000" in 357.041µs
	I0911 04:37:38.476997    3942 start.go:93] Provisioning new machine with config: &{Name:kindnet-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kindnet-838000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:37:38.477352    3942 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:37:38.483156    3942 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0911 04:37:38.527887    3942 start.go:159] libmachine.API.Create for "kindnet-838000" (driver="qemu2")
	I0911 04:37:38.527924    3942 client.go:168] LocalClient.Create starting
	I0911 04:37:38.528070    3942 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:37:38.528134    3942 main.go:141] libmachine: Decoding PEM data...
	I0911 04:37:38.528155    3942 main.go:141] libmachine: Parsing certificate...
	I0911 04:37:38.528221    3942 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:37:38.528255    3942 main.go:141] libmachine: Decoding PEM data...
	I0911 04:37:38.528270    3942 main.go:141] libmachine: Parsing certificate...
	I0911 04:37:38.528776    3942 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:37:38.661305    3942 main.go:141] libmachine: Creating SSH key...
	I0911 04:37:38.748529    3942 main.go:141] libmachine: Creating Disk image...
	I0911 04:37:38.748534    3942 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:37:38.748673    3942 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kindnet-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kindnet-838000/disk.qcow2
	I0911 04:37:38.757067    3942 main.go:141] libmachine: STDOUT: 
	I0911 04:37:38.757091    3942 main.go:141] libmachine: STDERR: 
	I0911 04:37:38.757150    3942 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kindnet-838000/disk.qcow2 +20000M
	I0911 04:37:38.764419    3942 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:37:38.764434    3942 main.go:141] libmachine: STDERR: 
	I0911 04:37:38.764450    3942 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kindnet-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kindnet-838000/disk.qcow2
	I0911 04:37:38.764457    3942 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:37:38.764489    3942 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kindnet-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/kindnet-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kindnet-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:14:00:c3:50:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kindnet-838000/disk.qcow2
	I0911 04:37:38.765957    3942 main.go:141] libmachine: STDOUT: 
	I0911 04:37:38.765971    3942 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:37:38.765983    3942 client.go:171] LocalClient.Create took 238.055125ms
	I0911 04:37:40.768143    3942 start.go:128] duration metric: createHost completed in 2.290765834s
	I0911 04:37:40.768203    3942 start.go:83] releasing machines lock for "kindnet-838000", held for 2.291309041s
	W0911 04:37:40.768553    3942 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:37:40.777951    3942 out.go:177] 
	W0911 04:37:40.782437    3942 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:37:40.782473    3942 out.go:239] * 
	* 
	W0911 04:37:40.785056    3942 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:37:40.794349    3942 out.go:177] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.77s)
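
Analysis: every start in this group dies on the same libmachine STDERR line, `Failed to connect to "/var/run/socket_vmnet": Connection refused`. QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which needs a socket_vmnet daemon listening on /var/run/socket_vmnet, and on this agent nothing is. The condition can be checked outside minikube with a small Go probe (a minimal sketch, not part of the test suite; only the socket path is taken from the log):

// probe_socket_vmnet.go - minimal sketch: dial the unix socket that
// socket_vmnet_client connects to. Only the path comes from the log;
// the program itself is illustrative.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path from the failing QEMU invocation
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// A "connection refused" here matches the STDERR captured by libmachine.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is listening on", sock)
}

Against this host the probe would be expected to report the same connection refusal that libmachine records, which is why no VM in this group ever boots.
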
TestNetworkPlugins/group/calico/Start (9.68s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.674689417s)
-- stdout --
	* [calico-838000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node calico-838000 in cluster calico-838000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-838000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0911 04:37:42.992798    4058 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:37:42.992906    4058 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:37:42.992908    4058 out.go:309] Setting ErrFile to fd 2...
	I0911 04:37:42.992911    4058 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:37:42.993018    4058 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:37:42.993972    4058 out.go:303] Setting JSON to false
	I0911 04:37:43.009081    4058 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4036,"bootTime":1694428226,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0911 04:37:43.009157    4058 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:37:43.014892    4058 out.go:177] * [calico-838000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:37:43.018879    4058 out.go:177]   - MINIKUBE_LOCATION=17225
	I0911 04:37:43.021795    4058 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 04:37:43.018922    4058 notify.go:220] Checking for updates...
	I0911 04:37:43.028865    4058 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:37:43.031848    4058 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:37:43.034895    4058 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	I0911 04:37:43.038025    4058 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:37:43.039542    4058 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:37:43.043857    4058 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:37:43.050722    4058 start.go:298] selected driver: qemu2
	I0911 04:37:43.050727    4058 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:37:43.050732    4058 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:37:43.052577    4058 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:37:43.055875    4058 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:37:43.058937    4058 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:37:43.058961    4058 cni.go:84] Creating CNI manager for "calico"
	I0911 04:37:43.058965    4058 start_flags.go:316] Found "Calico" CNI - setting NetworkPlugin=cni
	I0911 04:37:43.058971    4058 start_flags.go:321] config:
	{Name:calico-838000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:calico-838000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:37:43.063235    4058 iso.go:125] acquiring lock: {Name:mk940869f3ea2950f9d7f9f25946c8b774cc6054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:37:43.069883    4058 out.go:177] * Starting control plane node calico-838000 in cluster calico-838000
	I0911 04:37:43.073870    4058 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:37:43.073886    4058 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:37:43.073899    4058 cache.go:57] Caching tarball of preloaded images
	I0911 04:37:43.073953    4058 preload.go:174] Found /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:37:43.073959    4058 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:37:43.074557    4058 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/calico-838000/config.json ...
	I0911 04:37:43.074579    4058 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/calico-838000/config.json: {Name:mkbae87c8dada7b7a22fb1b1e169753feb93e68b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:37:43.074820    4058 start.go:365] acquiring machines lock for calico-838000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:37:43.074851    4058 start.go:369] acquired machines lock for "calico-838000" in 24.792µs
	I0911 04:37:43.074862    4058 start.go:93] Provisioning new machine with config: &{Name:calico-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:calico-838000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:37:43.074948    4058 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:37:43.083953    4058 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0911 04:37:43.100232    4058 start.go:159] libmachine.API.Create for "calico-838000" (driver="qemu2")
	I0911 04:37:43.100264    4058 client.go:168] LocalClient.Create starting
	I0911 04:37:43.100320    4058 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:37:43.100347    4058 main.go:141] libmachine: Decoding PEM data...
	I0911 04:37:43.100363    4058 main.go:141] libmachine: Parsing certificate...
	I0911 04:37:43.100402    4058 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:37:43.100421    4058 main.go:141] libmachine: Decoding PEM data...
	I0911 04:37:43.100431    4058 main.go:141] libmachine: Parsing certificate...
	I0911 04:37:43.100767    4058 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:37:43.220564    4058 main.go:141] libmachine: Creating SSH key...
	I0911 04:37:43.278779    4058 main.go:141] libmachine: Creating Disk image...
	I0911 04:37:43.278785    4058 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:37:43.278924    4058 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/calico-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/calico-838000/disk.qcow2
	I0911 04:37:43.287299    4058 main.go:141] libmachine: STDOUT: 
	I0911 04:37:43.287316    4058 main.go:141] libmachine: STDERR: 
	I0911 04:37:43.287368    4058 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/calico-838000/disk.qcow2 +20000M
	I0911 04:37:43.294527    4058 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:37:43.294538    4058 main.go:141] libmachine: STDERR: 
	I0911 04:37:43.294553    4058 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/calico-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/calico-838000/disk.qcow2
	I0911 04:37:43.294561    4058 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:37:43.294592    4058 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/calico-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/calico-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/calico-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:6e:32:33:4d:1c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/calico-838000/disk.qcow2
	I0911 04:37:43.296149    4058 main.go:141] libmachine: STDOUT: 
	I0911 04:37:43.296162    4058 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:37:43.296181    4058 client.go:171] LocalClient.Create took 195.91125ms
	I0911 04:37:45.298416    4058 start.go:128] duration metric: createHost completed in 2.223421125s
	I0911 04:37:45.298500    4058 start.go:83] releasing machines lock for "calico-838000", held for 2.223641292s
	W0911 04:37:45.298575    4058 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:37:45.306883    4058 out.go:177] * Deleting "calico-838000" in qemu2 ...
	W0911 04:37:45.327430    4058 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:37:45.327459    4058 start.go:687] Will try again in 5 seconds ...
	I0911 04:37:50.329807    4058 start.go:365] acquiring machines lock for calico-838000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:37:50.330339    4058 start.go:369] acquired machines lock for "calico-838000" in 418.916µs
	I0911 04:37:50.330487    4058 start.go:93] Provisioning new machine with config: &{Name:calico-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:calico-838000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:37:50.330826    4058 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:37:50.340532    4058 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0911 04:37:50.387386    4058 start.go:159] libmachine.API.Create for "calico-838000" (driver="qemu2")
	I0911 04:37:50.387440    4058 client.go:168] LocalClient.Create starting
	I0911 04:37:50.387539    4058 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:37:50.387594    4058 main.go:141] libmachine: Decoding PEM data...
	I0911 04:37:50.387609    4058 main.go:141] libmachine: Parsing certificate...
	I0911 04:37:50.387690    4058 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:37:50.387726    4058 main.go:141] libmachine: Decoding PEM data...
	I0911 04:37:50.387741    4058 main.go:141] libmachine: Parsing certificate...
	I0911 04:37:50.388278    4058 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:37:50.514457    4058 main.go:141] libmachine: Creating SSH key...
	I0911 04:37:50.582376    4058 main.go:141] libmachine: Creating Disk image...
	I0911 04:37:50.582382    4058 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:37:50.582534    4058 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/calico-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/calico-838000/disk.qcow2
	I0911 04:37:50.591075    4058 main.go:141] libmachine: STDOUT: 
	I0911 04:37:50.591092    4058 main.go:141] libmachine: STDERR: 
	I0911 04:37:50.591157    4058 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/calico-838000/disk.qcow2 +20000M
	I0911 04:37:50.598472    4058 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:37:50.598486    4058 main.go:141] libmachine: STDERR: 
	I0911 04:37:50.598500    4058 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/calico-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/calico-838000/disk.qcow2
	I0911 04:37:50.598506    4058 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:37:50.598542    4058 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/calico-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/calico-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/calico-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:8a:e8:fc:61:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/calico-838000/disk.qcow2
	I0911 04:37:50.600091    4058 main.go:141] libmachine: STDOUT: 
	I0911 04:37:50.600107    4058 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:37:50.600126    4058 client.go:171] LocalClient.Create took 212.677167ms
	I0911 04:37:52.602311    4058 start.go:128] duration metric: createHost completed in 2.271461458s
	I0911 04:37:52.602367    4058 start.go:83] releasing machines lock for "calico-838000", held for 2.272003917s
	W0911 04:37:52.602770    4058 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:37:52.611451    4058 out.go:177] 
	W0911 04:37:52.616452    4058 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:37:52.616475    4058 out.go:239] * 
	* 
	W0911 04:37:52.619255    4058 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:37:52.627195    4058 out.go:177] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.68s)
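
Note the recovery path visible in the calico stderr above: the first createHost fails (start.go:672), minikube deletes the half-created machine, waits five seconds (start.go:687), tries createHost once more, and only then exits. A rough Go sketch of that retry shape, using hypothetical function names rather than minikube's real internals:

// Illustrative retry shape only; not minikube's actual code. The messages
// mirror the log lines emitted via out.go:239.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// createHost stands in for libmachine.API.Create, which in this run always
// fails because nothing is listening on /var/run/socket_vmnet.
func createHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := createHost(); err == nil {
		return
	} else {
		fmt.Println("! StartHost failed, but will try again:", err)
	}
	time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
	if err := createHost(); err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		os.Exit(80) // the status the harness records as "failed start: exit status 80"
	}
}

With the daemon down, the single retry cannot help, so each test burns roughly ten seconds (two creates plus the back-off) before failing.
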
TestNetworkPlugins/group/custom-flannel/Start (9.69s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.675996958s)
-- stdout --
	* [custom-flannel-838000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node custom-flannel-838000 in cluster custom-flannel-838000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-838000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0911 04:37:54.954690    4176 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:37:54.954801    4176 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:37:54.954804    4176 out.go:309] Setting ErrFile to fd 2...
	I0911 04:37:54.954806    4176 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:37:54.954935    4176 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:37:54.956055    4176 out.go:303] Setting JSON to false
	I0911 04:37:54.971070    4176 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4048,"bootTime":1694428226,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0911 04:37:54.971144    4176 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:37:54.976760    4176 out.go:177] * [custom-flannel-838000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:37:54.983770    4176 out.go:177]   - MINIKUBE_LOCATION=17225
	I0911 04:37:54.987729    4176 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 04:37:54.983836    4176 notify.go:220] Checking for updates...
	I0911 04:37:54.990806    4176 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:37:54.993772    4176 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:37:54.996724    4176 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	I0911 04:37:54.999814    4176 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:37:55.002947    4176 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:37:55.006784    4176 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:37:55.013671    4176 start.go:298] selected driver: qemu2
	I0911 04:37:55.013676    4176 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:37:55.013682    4176 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:37:55.015525    4176 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:37:55.018806    4176 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:37:55.021885    4176 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:37:55.021920    4176 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0911 04:37:55.021931    4176 start_flags.go:316] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0911 04:37:55.021936    4176 start_flags.go:321] config:
	{Name:custom-flannel-838000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:custom-flannel-838000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:37:55.026008    4176 iso.go:125] acquiring lock: {Name:mk940869f3ea2950f9d7f9f25946c8b774cc6054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:37:55.028787    4176 out.go:177] * Starting control plane node custom-flannel-838000 in cluster custom-flannel-838000
	I0911 04:37:55.036827    4176 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:37:55.036844    4176 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:37:55.036859    4176 cache.go:57] Caching tarball of preloaded images
	I0911 04:37:55.036917    4176 preload.go:174] Found /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:37:55.036922    4176 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:37:55.037488    4176 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/custom-flannel-838000/config.json ...
	I0911 04:37:55.037511    4176 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/custom-flannel-838000/config.json: {Name:mkcf7454a566b215eab6f27acef073a23edf31c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:37:55.037737    4176 start.go:365] acquiring machines lock for custom-flannel-838000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:37:55.037771    4176 start.go:369] acquired machines lock for "custom-flannel-838000" in 23.917µs
	I0911 04:37:55.037782    4176 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:custom-flannel-838000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:37:55.037866    4176 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:37:55.049645    4176 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0911 04:37:55.065924    4176 start.go:159] libmachine.API.Create for "custom-flannel-838000" (driver="qemu2")
	I0911 04:37:55.065953    4176 client.go:168] LocalClient.Create starting
	I0911 04:37:55.066027    4176 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:37:55.066058    4176 main.go:141] libmachine: Decoding PEM data...
	I0911 04:37:55.066072    4176 main.go:141] libmachine: Parsing certificate...
	I0911 04:37:55.066112    4176 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:37:55.066131    4176 main.go:141] libmachine: Decoding PEM data...
	I0911 04:37:55.066140    4176 main.go:141] libmachine: Parsing certificate...
	I0911 04:37:55.066481    4176 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:37:55.184451    4176 main.go:141] libmachine: Creating SSH key...
	I0911 04:37:55.256550    4176 main.go:141] libmachine: Creating Disk image...
	I0911 04:37:55.256555    4176 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:37:55.256706    4176 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/custom-flannel-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/custom-flannel-838000/disk.qcow2
	I0911 04:37:55.265142    4176 main.go:141] libmachine: STDOUT: 
	I0911 04:37:55.265155    4176 main.go:141] libmachine: STDERR: 
	I0911 04:37:55.265201    4176 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/custom-flannel-838000/disk.qcow2 +20000M
	I0911 04:37:55.272383    4176 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:37:55.272397    4176 main.go:141] libmachine: STDERR: 
	I0911 04:37:55.272413    4176 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/custom-flannel-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/custom-flannel-838000/disk.qcow2
	I0911 04:37:55.272425    4176 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:37:55.272459    4176 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/custom-flannel-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/custom-flannel-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/custom-flannel-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:55:2f:a8:3e:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/custom-flannel-838000/disk.qcow2
	I0911 04:37:55.274000    4176 main.go:141] libmachine: STDOUT: 
	I0911 04:37:55.274013    4176 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:37:55.274030    4176 client.go:171] LocalClient.Create took 208.072375ms
	I0911 04:37:57.276236    4176 start.go:128] duration metric: createHost completed in 2.238351541s
	I0911 04:37:57.276294    4176 start.go:83] releasing machines lock for "custom-flannel-838000", held for 2.238515959s
	W0911 04:37:57.276352    4176 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:37:57.283718    4176 out.go:177] * Deleting "custom-flannel-838000" in qemu2 ...
	W0911 04:37:57.307733    4176 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:37:57.307756    4176 start.go:687] Will try again in 5 seconds ...
	I0911 04:38:02.309812    4176 start.go:365] acquiring machines lock for custom-flannel-838000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:38:02.310256    4176 start.go:369] acquired machines lock for "custom-flannel-838000" in 342.375µs
	I0911 04:38:02.310380    4176 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:custom-flannel-838000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:38:02.310730    4176 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:38:02.319402    4176 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0911 04:38:02.366075    4176 start.go:159] libmachine.API.Create for "custom-flannel-838000" (driver="qemu2")
	I0911 04:38:02.366099    4176 client.go:168] LocalClient.Create starting
	I0911 04:38:02.366221    4176 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:38:02.366274    4176 main.go:141] libmachine: Decoding PEM data...
	I0911 04:38:02.366290    4176 main.go:141] libmachine: Parsing certificate...
	I0911 04:38:02.366372    4176 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:38:02.366407    4176 main.go:141] libmachine: Decoding PEM data...
	I0911 04:38:02.366419    4176 main.go:141] libmachine: Parsing certificate...
	I0911 04:38:02.366895    4176 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:38:02.493238    4176 main.go:141] libmachine: Creating SSH key...
	I0911 04:38:02.545909    4176 main.go:141] libmachine: Creating Disk image...
	I0911 04:38:02.545914    4176 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:38:02.546052    4176 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/custom-flannel-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/custom-flannel-838000/disk.qcow2
	I0911 04:38:02.554418    4176 main.go:141] libmachine: STDOUT: 
	I0911 04:38:02.554431    4176 main.go:141] libmachine: STDERR: 
	I0911 04:38:02.554493    4176 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/custom-flannel-838000/disk.qcow2 +20000M
	I0911 04:38:02.561725    4176 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:38:02.561738    4176 main.go:141] libmachine: STDERR: 
	I0911 04:38:02.561751    4176 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/custom-flannel-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/custom-flannel-838000/disk.qcow2
	I0911 04:38:02.561756    4176 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:38:02.561796    4176 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/custom-flannel-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/custom-flannel-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/custom-flannel-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:6c:a5:e7:11:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/custom-flannel-838000/disk.qcow2
	I0911 04:38:02.563392    4176 main.go:141] libmachine: STDOUT: 
	I0911 04:38:02.563403    4176 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:38:02.563416    4176 client.go:171] LocalClient.Create took 197.312791ms
	I0911 04:38:04.565560    4176 start.go:128] duration metric: createHost completed in 2.2548065s
	I0911 04:38:04.565632    4176 start.go:83] releasing machines lock for "custom-flannel-838000", held for 2.255356167s
	W0911 04:38:04.566077    4176 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:38:04.575704    4176 out.go:177] 
	W0911 04:38:04.579603    4176 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:38:04.579651    4176 out.go:239] * 
	* 
	W0911 04:38:04.582488    4176 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:38:04.590654    4176 out.go:177] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.69s)
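
For context, the harness at net_test.go:112 simply drives the freshly built binary as an external process and fails the test on a non-zero exit. A standalone approximation (hypothetical helper; the command line is copied from the log above):

// Standalone approximation of the harness step at net_test.go:112;
// not the suite's actual helper.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "start",
		"-p", "custom-flannel-838000", "--memory=3072", "--alsologtostderr",
		"--wait=true", "--wait-timeout=15m",
		"--cni=testdata/kube-flannel.yaml", "--driver=qemu2")
	out, err := cmd.CombinedOutput() // source of the report's stdout/stderr blocks
	fmt.Printf("%s", out)
	if err != nil {
		// ExitCode returns -1 if the binary never started; in the run above it was 80.
		fmt.Println("failed start: exit status", cmd.ProcessState.ExitCode())
	}
}
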
TestNetworkPlugins/group/false/Start (9.81s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.804824291s)
-- stdout --
	* [false-838000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node false-838000 in cluster false-838000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-838000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0911 04:38:06.923836    4294 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:38:06.923946    4294 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:38:06.923949    4294 out.go:309] Setting ErrFile to fd 2...
	I0911 04:38:06.923952    4294 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:38:06.924052    4294 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:38:06.925031    4294 out.go:303] Setting JSON to false
	I0911 04:38:06.939805    4294 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4060,"bootTime":1694428226,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0911 04:38:06.939859    4294 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:38:06.944219    4294 out.go:177] * [false-838000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:38:06.951181    4294 out.go:177]   - MINIKUBE_LOCATION=17225
	I0911 04:38:06.955130    4294 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 04:38:06.951245    4294 notify.go:220] Checking for updates...
	I0911 04:38:06.958162    4294 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:38:06.961149    4294 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:38:06.964252    4294 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	I0911 04:38:06.967165    4294 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:38:06.970589    4294 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:38:06.974088    4294 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:38:06.981132    4294 start.go:298] selected driver: qemu2
	I0911 04:38:06.981137    4294 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:38:06.981144    4294 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:38:06.983153    4294 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:38:06.986108    4294 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:38:06.989182    4294 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:38:06.989209    4294 cni.go:84] Creating CNI manager for "false"
	I0911 04:38:06.989215    4294 start_flags.go:321] config:
	{Name:false-838000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:false-838000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:38:06.993159    4294 iso.go:125] acquiring lock: {Name:mk940869f3ea2950f9d7f9f25946c8b774cc6054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:38:06.998125    4294 out.go:177] * Starting control plane node false-838000 in cluster false-838000
	I0911 04:38:07.002038    4294 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:38:07.002056    4294 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:38:07.002073    4294 cache.go:57] Caching tarball of preloaded images
	I0911 04:38:07.002127    4294 preload.go:174] Found /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:38:07.002133    4294 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:38:07.002318    4294 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/false-838000/config.json ...
	I0911 04:38:07.002330    4294 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/false-838000/config.json: {Name:mk0d8ee6bd664e75cc4a8c462c124f7d8aa4c1f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:38:07.002560    4294 start.go:365] acquiring machines lock for false-838000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:38:07.002597    4294 start.go:369] acquired machines lock for "false-838000" in 30.5µs
	I0911 04:38:07.002610    4294 start.go:93] Provisioning new machine with config: &{Name:false-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:false-838000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:38:07.002657    4294 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:38:07.011118    4294 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0911 04:38:07.026643    4294 start.go:159] libmachine.API.Create for "false-838000" (driver="qemu2")
	I0911 04:38:07.026680    4294 client.go:168] LocalClient.Create starting
	I0911 04:38:07.026745    4294 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:38:07.026775    4294 main.go:141] libmachine: Decoding PEM data...
	I0911 04:38:07.026787    4294 main.go:141] libmachine: Parsing certificate...
	I0911 04:38:07.026833    4294 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:38:07.026851    4294 main.go:141] libmachine: Decoding PEM data...
	I0911 04:38:07.026861    4294 main.go:141] libmachine: Parsing certificate...
	I0911 04:38:07.027195    4294 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:38:07.156541    4294 main.go:141] libmachine: Creating SSH key...
	I0911 04:38:07.216842    4294 main.go:141] libmachine: Creating Disk image...
	I0911 04:38:07.216848    4294 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:38:07.216991    4294 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/false-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/false-838000/disk.qcow2
	I0911 04:38:07.225364    4294 main.go:141] libmachine: STDOUT: 
	I0911 04:38:07.225382    4294 main.go:141] libmachine: STDERR: 
	I0911 04:38:07.225425    4294 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/false-838000/disk.qcow2 +20000M
	I0911 04:38:07.232526    4294 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:38:07.232538    4294 main.go:141] libmachine: STDERR: 
	I0911 04:38:07.232561    4294 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/false-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/false-838000/disk.qcow2
	I0911 04:38:07.232579    4294 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:38:07.232608    4294 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/false-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/false-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/false-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:78:54:5e:79:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/false-838000/disk.qcow2
	I0911 04:38:07.234103    4294 main.go:141] libmachine: STDOUT: 
	I0911 04:38:07.234116    4294 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:38:07.234136    4294 client.go:171] LocalClient.Create took 207.451ms
	I0911 04:38:09.236291    4294 start.go:128] duration metric: createHost completed in 2.233619459s
	I0911 04:38:09.236354    4294 start.go:83] releasing machines lock for "false-838000", held for 2.233749125s
	W0911 04:38:09.236469    4294 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:38:09.244883    4294 out.go:177] * Deleting "false-838000" in qemu2 ...
	W0911 04:38:09.264899    4294 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:38:09.264924    4294 start.go:687] Will try again in 5 seconds ...
	I0911 04:38:14.267141    4294 start.go:365] acquiring machines lock for false-838000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:38:14.267525    4294 start.go:369] acquired machines lock for "false-838000" in 301.625µs
	I0911 04:38:14.267650    4294 start.go:93] Provisioning new machine with config: &{Name:false-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:false-838000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:38:14.267924    4294 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:38:14.277430    4294 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0911 04:38:14.323700    4294 start.go:159] libmachine.API.Create for "false-838000" (driver="qemu2")
	I0911 04:38:14.323753    4294 client.go:168] LocalClient.Create starting
	I0911 04:38:14.323867    4294 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:38:14.323946    4294 main.go:141] libmachine: Decoding PEM data...
	I0911 04:38:14.323966    4294 main.go:141] libmachine: Parsing certificate...
	I0911 04:38:14.324034    4294 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:38:14.324083    4294 main.go:141] libmachine: Decoding PEM data...
	I0911 04:38:14.324101    4294 main.go:141] libmachine: Parsing certificate...
	I0911 04:38:14.324602    4294 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:38:14.449855    4294 main.go:141] libmachine: Creating SSH key...
	I0911 04:38:14.642927    4294 main.go:141] libmachine: Creating Disk image...
	I0911 04:38:14.642934    4294 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:38:14.643099    4294 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/false-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/false-838000/disk.qcow2
	I0911 04:38:14.651733    4294 main.go:141] libmachine: STDOUT: 
	I0911 04:38:14.651748    4294 main.go:141] libmachine: STDERR: 
	I0911 04:38:14.651815    4294 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/false-838000/disk.qcow2 +20000M
	I0911 04:38:14.658966    4294 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:38:14.658980    4294 main.go:141] libmachine: STDERR: 
	I0911 04:38:14.658991    4294 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/false-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/false-838000/disk.qcow2
	I0911 04:38:14.658998    4294 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:38:14.659035    4294 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/false-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/false-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/false-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:0a:d8:3d:e1:75 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/false-838000/disk.qcow2
	I0911 04:38:14.660583    4294 main.go:141] libmachine: STDOUT: 
	I0911 04:38:14.660599    4294 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:38:14.660612    4294 client.go:171] LocalClient.Create took 336.854583ms
	I0911 04:38:16.662923    4294 start.go:128] duration metric: createHost completed in 2.394856958s
	I0911 04:38:16.663006    4294 start.go:83] releasing machines lock for "false-838000", held for 2.395459042s
	W0911 04:38:16.663510    4294 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:38:16.672287    4294 out.go:177] 
	W0911 04:38:16.676333    4294 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:38:16.676398    4294 out.go:239] * 
	* 
	W0911 04:38:16.679192    4294 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:38:16.692121    4294 out.go:177] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.81s)
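
Note: every failure in this group is the same infrastructure fault rather than a CNI-specific regression. libmachine wraps qemu-system-aarch64 in /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet ("Connection refused"). A minimal triage sketch, assuming the standalone /opt/socket_vmnet install implied by the SocketVMnetClientPath in the config above (all paths are taken from the log; adjust for other installs):

    # Is anything bound to the unix socket the client dials?
    ls -l /var/run/socket_vmnet

    # Is the socket_vmnet daemon process running at all?
    pgrep -fl socket_vmnet

    # Reproduce the client-side failure outside minikube by wrapping a
    # no-op instead of qemu-system-aarch64; if the daemon is down, this
    # prints the same 'Failed to connect to "/var/run/socket_vmnet"' error.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

While the daemon is unreachable, every subsequent test in this group fails in the same ~10s pattern seen here.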
TestNetworkPlugins/group/enable-default-cni/Start (9.84s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.835233s)
-- stdout --
	* [enable-default-cni-838000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node enable-default-cni-838000 in cluster enable-default-cni-838000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-838000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0911 04:38:18.835768    4404 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:38:18.835886    4404 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:38:18.835889    4404 out.go:309] Setting ErrFile to fd 2...
	I0911 04:38:18.835891    4404 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:38:18.836002    4404 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:38:18.837105    4404 out.go:303] Setting JSON to false
	I0911 04:38:18.852016    4404 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4072,"bootTime":1694428226,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0911 04:38:18.852088    4404 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:38:18.856688    4404 out.go:177] * [enable-default-cni-838000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:38:18.862575    4404 out.go:177]   - MINIKUBE_LOCATION=17225
	I0911 04:38:18.866611    4404 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 04:38:18.862669    4404 notify.go:220] Checking for updates...
	I0911 04:38:18.872488    4404 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:38:18.875532    4404 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:38:18.878446    4404 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	I0911 04:38:18.881562    4404 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:38:18.884678    4404 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:38:18.888481    4404 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:38:18.895514    4404 start.go:298] selected driver: qemu2
	I0911 04:38:18.895520    4404 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:38:18.895527    4404 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:38:18.897463    4404 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:38:18.900495    4404 out.go:177] * Automatically selected the socket_vmnet network
	E0911 04:38:18.903536    4404 start_flags.go:455] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0911 04:38:18.903546    4404 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:38:18.903562    4404 cni.go:84] Creating CNI manager for "bridge"
	I0911 04:38:18.903567    4404 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0911 04:38:18.903573    4404 start_flags.go:321] config:
	{Name:enable-default-cni-838000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:enable-default-cni-838000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:38:18.907712    4404 iso.go:125] acquiring lock: {Name:mk940869f3ea2950f9d7f9f25946c8b774cc6054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:38:18.910646    4404 out.go:177] * Starting control plane node enable-default-cni-838000 in cluster enable-default-cni-838000
	I0911 04:38:18.918509    4404 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:38:18.918534    4404 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:38:18.918551    4404 cache.go:57] Caching tarball of preloaded images
	I0911 04:38:18.918638    4404 preload.go:174] Found /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:38:18.918643    4404 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:38:18.918830    4404 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/enable-default-cni-838000/config.json ...
	I0911 04:38:18.918843    4404 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/enable-default-cni-838000/config.json: {Name:mk6a5d7cef49c13c0ae37e39dc2873d59d017bf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:38:18.919070    4404 start.go:365] acquiring machines lock for enable-default-cni-838000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:38:18.919102    4404 start.go:369] acquired machines lock for "enable-default-cni-838000" in 24.458µs
	I0911 04:38:18.919113    4404 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:enable-default-cni-838000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:38:18.919152    4404 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:38:18.926511    4404 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0911 04:38:18.942269    4404 start.go:159] libmachine.API.Create for "enable-default-cni-838000" (driver="qemu2")
	I0911 04:38:18.942295    4404 client.go:168] LocalClient.Create starting
	I0911 04:38:18.942357    4404 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:38:18.942394    4404 main.go:141] libmachine: Decoding PEM data...
	I0911 04:38:18.942407    4404 main.go:141] libmachine: Parsing certificate...
	I0911 04:38:18.942451    4404 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:38:18.942469    4404 main.go:141] libmachine: Decoding PEM data...
	I0911 04:38:18.942477    4404 main.go:141] libmachine: Parsing certificate...
	I0911 04:38:18.942811    4404 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:38:19.059580    4404 main.go:141] libmachine: Creating SSH key...
	I0911 04:38:19.153600    4404 main.go:141] libmachine: Creating Disk image...
	I0911 04:38:19.153606    4404 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:38:19.153746    4404 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/enable-default-cni-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/enable-default-cni-838000/disk.qcow2
	I0911 04:38:19.162197    4404 main.go:141] libmachine: STDOUT: 
	I0911 04:38:19.162211    4404 main.go:141] libmachine: STDERR: 
	I0911 04:38:19.162270    4404 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/enable-default-cni-838000/disk.qcow2 +20000M
	I0911 04:38:19.169361    4404 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:38:19.169373    4404 main.go:141] libmachine: STDERR: 
	I0911 04:38:19.169394    4404 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/enable-default-cni-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/enable-default-cni-838000/disk.qcow2
	I0911 04:38:19.169404    4404 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:38:19.169438    4404 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/enable-default-cni-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/enable-default-cni-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/enable-default-cni-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:65:5a:cb:71:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/enable-default-cni-838000/disk.qcow2
	I0911 04:38:19.170907    4404 main.go:141] libmachine: STDOUT: 
	I0911 04:38:19.170918    4404 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:38:19.170936    4404 client.go:171] LocalClient.Create took 228.633833ms
	I0911 04:38:21.173109    4404 start.go:128] duration metric: createHost completed in 2.253944s
	I0911 04:38:21.173173    4404 start.go:83] releasing machines lock for "enable-default-cni-838000", held for 2.254064417s
	W0911 04:38:21.173234    4404 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:38:21.183782    4404 out.go:177] * Deleting "enable-default-cni-838000" in qemu2 ...
	W0911 04:38:21.205824    4404 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:38:21.205855    4404 start.go:687] Will try again in 5 seconds ...
	I0911 04:38:26.208125    4404 start.go:365] acquiring machines lock for enable-default-cni-838000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:38:26.208617    4404 start.go:369] acquired machines lock for "enable-default-cni-838000" in 375.458µs
	I0911 04:38:26.208725    4404 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:enable-default-cni-838000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:38:26.209236    4404 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:38:26.217936    4404 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0911 04:38:26.264798    4404 start.go:159] libmachine.API.Create for "enable-default-cni-838000" (driver="qemu2")
	I0911 04:38:26.264838    4404 client.go:168] LocalClient.Create starting
	I0911 04:38:26.264962    4404 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:38:26.265037    4404 main.go:141] libmachine: Decoding PEM data...
	I0911 04:38:26.265087    4404 main.go:141] libmachine: Parsing certificate...
	I0911 04:38:26.265166    4404 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:38:26.265207    4404 main.go:141] libmachine: Decoding PEM data...
	I0911 04:38:26.265224    4404 main.go:141] libmachine: Parsing certificate...
	I0911 04:38:26.265775    4404 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:38:26.400396    4404 main.go:141] libmachine: Creating SSH key...
	I0911 04:38:26.583066    4404 main.go:141] libmachine: Creating Disk image...
	I0911 04:38:26.583072    4404 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:38:26.583219    4404 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/enable-default-cni-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/enable-default-cni-838000/disk.qcow2
	I0911 04:38:26.591930    4404 main.go:141] libmachine: STDOUT: 
	I0911 04:38:26.591945    4404 main.go:141] libmachine: STDERR: 
	I0911 04:38:26.592037    4404 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/enable-default-cni-838000/disk.qcow2 +20000M
	I0911 04:38:26.599287    4404 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:38:26.599300    4404 main.go:141] libmachine: STDERR: 
	I0911 04:38:26.599312    4404 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/enable-default-cni-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/enable-default-cni-838000/disk.qcow2
	I0911 04:38:26.599318    4404 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:38:26.599363    4404 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/enable-default-cni-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/enable-default-cni-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/enable-default-cni-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:6a:d1:12:20:14 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/enable-default-cni-838000/disk.qcow2
	I0911 04:38:26.600881    4404 main.go:141] libmachine: STDOUT: 
	I0911 04:38:26.600893    4404 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:38:26.600905    4404 client.go:171] LocalClient.Create took 336.060292ms
	I0911 04:38:28.603052    4404 start.go:128] duration metric: createHost completed in 2.393795792s
	I0911 04:38:28.603145    4404 start.go:83] releasing machines lock for "enable-default-cni-838000", held for 2.394476541s
	W0911 04:38:28.603566    4404 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:38:28.614274    4404 out.go:177] 
	W0911 04:38:28.618335    4404 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:38:28.618359    4404 out.go:239] * 
	* 
	W0911 04:38:28.621003    4404 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:38:28.630350    4404 out.go:177] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.84s)
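
The retry loop visible above ("StartHost failed, but will try again", then "Will try again in 5 seconds") cannot succeed while the daemon stays down, so each test burns roughly 10s and exits with GUEST_PROVISION. A hedged recovery sketch for the agent, again assuming the standalone /opt/socket_vmnet layout from the logs; the launchd label below is hypothetical and depends on how the daemon was installed (a Homebrew install would use "sudo brew services start socket_vmnet" instead):

    # If socket_vmnet was loaded as a launchd daemon, kick it back up.
    # The label is hypothetical; find the real one with:
    #   sudo launchctl list | grep socket_vmnet
    sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet

    # Or start it by hand (vmnet requires root); the gateway address is
    # only an example value, not taken from this report:
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &

    # Confirm the socket exists before re-running the suite:
    ls -l /var/run/socket_vmnet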
TestNetworkPlugins/group/flannel/Start (9.84s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.838325709s)
-- stdout --
	* [flannel-838000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node flannel-838000 in cluster flannel-838000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-838000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0911 04:38:30.782734    4517 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:38:30.782842    4517 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:38:30.782844    4517 out.go:309] Setting ErrFile to fd 2...
	I0911 04:38:30.782855    4517 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:38:30.782978    4517 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:38:30.783996    4517 out.go:303] Setting JSON to false
	I0911 04:38:30.799075    4517 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4084,"bootTime":1694428226,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0911 04:38:30.799159    4517 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:38:30.804381    4517 out.go:177] * [flannel-838000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:38:30.807256    4517 out.go:177]   - MINIKUBE_LOCATION=17225
	I0911 04:38:30.811169    4517 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 04:38:30.807324    4517 notify.go:220] Checking for updates...
	I0911 04:38:30.818131    4517 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:38:30.821230    4517 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:38:30.828170    4517 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	I0911 04:38:30.831232    4517 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:38:30.834402    4517 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:38:30.838122    4517 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:38:30.845258    4517 start.go:298] selected driver: qemu2
	I0911 04:38:30.845264    4517 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:38:30.845272    4517 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:38:30.847259    4517 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:38:30.850165    4517 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:38:30.853234    4517 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:38:30.853268    4517 cni.go:84] Creating CNI manager for "flannel"
	I0911 04:38:30.853272    4517 start_flags.go:316] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0911 04:38:30.853278    4517 start_flags.go:321] config:
	{Name:flannel-838000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:flannel-838000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:38:30.857560    4517 iso.go:125] acquiring lock: {Name:mk940869f3ea2950f9d7f9f25946c8b774cc6054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:38:30.864127    4517 out.go:177] * Starting control plane node flannel-838000 in cluster flannel-838000
	I0911 04:38:30.868241    4517 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:38:30.868266    4517 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:38:30.868283    4517 cache.go:57] Caching tarball of preloaded images
	I0911 04:38:30.868344    4517 preload.go:174] Found /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:38:30.868349    4517 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:38:30.868974    4517 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/flannel-838000/config.json ...
	I0911 04:38:30.868999    4517 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/flannel-838000/config.json: {Name:mke90c775cb010153376568bbada65fec10baf24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:38:30.869293    4517 start.go:365] acquiring machines lock for flannel-838000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:38:30.869468    4517 start.go:369] acquired machines lock for "flannel-838000" in 156.959µs
	I0911 04:38:30.869488    4517 start.go:93] Provisioning new machine with config: &{Name:flannel-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:flannel-838000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:38:30.869535    4517 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:38:30.877186    4517 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0911 04:38:30.892544    4517 start.go:159] libmachine.API.Create for "flannel-838000" (driver="qemu2")
	I0911 04:38:30.892575    4517 client.go:168] LocalClient.Create starting
	I0911 04:38:30.892631    4517 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:38:30.892653    4517 main.go:141] libmachine: Decoding PEM data...
	I0911 04:38:30.892666    4517 main.go:141] libmachine: Parsing certificate...
	I0911 04:38:30.892704    4517 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:38:30.892722    4517 main.go:141] libmachine: Decoding PEM data...
	I0911 04:38:30.892728    4517 main.go:141] libmachine: Parsing certificate...
	I0911 04:38:30.893096    4517 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:38:31.009649    4517 main.go:141] libmachine: Creating SSH key...
	I0911 04:38:31.211798    4517 main.go:141] libmachine: Creating Disk image...
	I0911 04:38:31.211805    4517 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:38:31.211975    4517 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/flannel-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/flannel-838000/disk.qcow2
	I0911 04:38:31.220674    4517 main.go:141] libmachine: STDOUT: 
	I0911 04:38:31.220690    4517 main.go:141] libmachine: STDERR: 
	I0911 04:38:31.220759    4517 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/flannel-838000/disk.qcow2 +20000M
	I0911 04:38:31.228057    4517 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:38:31.228070    4517 main.go:141] libmachine: STDERR: 
	I0911 04:38:31.228093    4517 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/flannel-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/flannel-838000/disk.qcow2
	I0911 04:38:31.228100    4517 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:38:31.228138    4517 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/flannel-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/flannel-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/flannel-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:5e:51:70:dc:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/flannel-838000/disk.qcow2
	I0911 04:38:31.229637    4517 main.go:141] libmachine: STDOUT: 
	I0911 04:38:31.229649    4517 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:38:31.229667    4517 client.go:171] LocalClient.Create took 337.08575ms
	I0911 04:38:33.231842    4517 start.go:128] duration metric: createHost completed in 2.362288292s
	I0911 04:38:33.231930    4517 start.go:83] releasing machines lock for "flannel-838000", held for 2.362427958s
	W0911 04:38:33.232000    4517 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:38:33.239469    4517 out.go:177] * Deleting "flannel-838000" in qemu2 ...
	W0911 04:38:33.260060    4517 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:38:33.260093    4517 start.go:687] Will try again in 5 seconds ...
	I0911 04:38:38.262399    4517 start.go:365] acquiring machines lock for flannel-838000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:38:38.262839    4517 start.go:369] acquired machines lock for "flannel-838000" in 335µs
	I0911 04:38:38.262993    4517 start.go:93] Provisioning new machine with config: &{Name:flannel-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:flannel-838000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:38:38.263301    4517 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:38:38.274983    4517 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0911 04:38:38.320407    4517 start.go:159] libmachine.API.Create for "flannel-838000" (driver="qemu2")
	I0911 04:38:38.320448    4517 client.go:168] LocalClient.Create starting
	I0911 04:38:38.320576    4517 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:38:38.320620    4517 main.go:141] libmachine: Decoding PEM data...
	I0911 04:38:38.320638    4517 main.go:141] libmachine: Parsing certificate...
	I0911 04:38:38.320710    4517 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:38:38.320744    4517 main.go:141] libmachine: Decoding PEM data...
	I0911 04:38:38.320759    4517 main.go:141] libmachine: Parsing certificate...
	I0911 04:38:38.321253    4517 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:38:38.456547    4517 main.go:141] libmachine: Creating SSH key...
	I0911 04:38:38.533059    4517 main.go:141] libmachine: Creating Disk image...
	I0911 04:38:38.533066    4517 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:38:38.533207    4517 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/flannel-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/flannel-838000/disk.qcow2
	I0911 04:38:38.541634    4517 main.go:141] libmachine: STDOUT: 
	I0911 04:38:38.541648    4517 main.go:141] libmachine: STDERR: 
	I0911 04:38:38.541754    4517 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/flannel-838000/disk.qcow2 +20000M
	I0911 04:38:38.548980    4517 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:38:38.548991    4517 main.go:141] libmachine: STDERR: 
	I0911 04:38:38.549007    4517 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/flannel-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/flannel-838000/disk.qcow2
	I0911 04:38:38.549015    4517 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:38:38.549105    4517 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/flannel-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/flannel-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/flannel-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:ed:99:b2:20:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/flannel-838000/disk.qcow2
	I0911 04:38:38.550660    4517 main.go:141] libmachine: STDOUT: 
	I0911 04:38:38.550670    4517 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:38:38.550681    4517 client.go:171] LocalClient.Create took 230.228708ms
	I0911 04:38:40.552866    4517 start.go:128] duration metric: createHost completed in 2.289542625s
	I0911 04:38:40.552925    4517 start.go:83] releasing machines lock for "flannel-838000", held for 2.290062042s
	W0911 04:38:40.553562    4517 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:38:40.563329    4517 out.go:177] 
	W0911 04:38:40.567286    4517 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:38:40.567320    4517 out.go:239] * 
	* 
	W0911 04:38:40.569706    4517 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:38:40.580155    4517 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.84s)
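
Both this failure and the bridge run that follows trace to the same stderr line: every socket_vmnet_client invocation reports Failed to connect to "/var/run/socket_vmnet": Connection refused, i.e. nothing is listening on the host-side unix socket the qemu2 driver uses for guest networking. A minimal Go sketch of that check, assuming the SocketVMnetPath value from the config dump above (a standalone diagnostic, not part of the test suite):

	// probe_socket_vmnet.go - standalone diagnostic sketch: dial the unix
	// socket that socket_vmnet_client connects to; with no daemon listening
	// this fails the same way the log does ("connection refused").
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the config dump
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails, restarting the socket_vmnet daemon on the CI host would be expected to clear this whole family of GUEST_PROVISION failures, since the qemu-img disk-creation steps in the log all succeed.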

TestNetworkPlugins/group/bridge/Start (9.71s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.706258209s)

-- stdout --
	* [bridge-838000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node bridge-838000 in cluster bridge-838000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-838000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0911 04:38:42.923261    4635 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:38:42.923367    4635 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:38:42.923370    4635 out.go:309] Setting ErrFile to fd 2...
	I0911 04:38:42.923373    4635 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:38:42.923489    4635 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:38:42.924494    4635 out.go:303] Setting JSON to false
	I0911 04:38:42.939660    4635 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4096,"bootTime":1694428226,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0911 04:38:42.939721    4635 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:38:42.942658    4635 out.go:177] * [bridge-838000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:38:42.949421    4635 out.go:177]   - MINIKUBE_LOCATION=17225
	I0911 04:38:42.952369    4635 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 04:38:42.949465    4635 notify.go:220] Checking for updates...
	I0911 04:38:42.958377    4635 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:38:42.959705    4635 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:38:42.962395    4635 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	I0911 04:38:42.965364    4635 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:38:42.968584    4635 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:38:42.972309    4635 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:38:42.979342    4635 start.go:298] selected driver: qemu2
	I0911 04:38:42.979346    4635 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:38:42.979352    4635 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:38:42.981455    4635 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:38:42.984398    4635 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:38:42.987480    4635 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:38:42.987505    4635 cni.go:84] Creating CNI manager for "bridge"
	I0911 04:38:42.987509    4635 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0911 04:38:42.987514    4635 start_flags.go:321] config:
	{Name:bridge-838000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:bridge-838000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:38:42.991834    4635 iso.go:125] acquiring lock: {Name:mk940869f3ea2950f9d7f9f25946c8b774cc6054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:38:42.998350    4635 out.go:177] * Starting control plane node bridge-838000 in cluster bridge-838000
	I0911 04:38:43.002309    4635 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:38:43.002327    4635 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:38:43.002342    4635 cache.go:57] Caching tarball of preloaded images
	I0911 04:38:43.002398    4635 preload.go:174] Found /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:38:43.002403    4635 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:38:43.002598    4635 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/bridge-838000/config.json ...
	I0911 04:38:43.002611    4635 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/bridge-838000/config.json: {Name:mk5c8da8d870fee774ac69e0ac0d9edf182f1ba0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:38:43.002787    4635 start.go:365] acquiring machines lock for bridge-838000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:38:43.002817    4635 start.go:369] acquired machines lock for "bridge-838000" in 24.333µs
	I0911 04:38:43.002827    4635 start.go:93] Provisioning new machine with config: &{Name:bridge-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:bridge-838000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:38:43.002858    4635 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:38:43.007396    4635 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0911 04:38:43.022339    4635 start.go:159] libmachine.API.Create for "bridge-838000" (driver="qemu2")
	I0911 04:38:43.022372    4635 client.go:168] LocalClient.Create starting
	I0911 04:38:43.022431    4635 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:38:43.022455    4635 main.go:141] libmachine: Decoding PEM data...
	I0911 04:38:43.022468    4635 main.go:141] libmachine: Parsing certificate...
	I0911 04:38:43.022508    4635 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:38:43.022526    4635 main.go:141] libmachine: Decoding PEM data...
	I0911 04:38:43.022532    4635 main.go:141] libmachine: Parsing certificate...
	I0911 04:38:43.022842    4635 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:38:43.142978    4635 main.go:141] libmachine: Creating SSH key...
	I0911 04:38:43.264847    4635 main.go:141] libmachine: Creating Disk image...
	I0911 04:38:43.264853    4635 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:38:43.265007    4635 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/bridge-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/bridge-838000/disk.qcow2
	I0911 04:38:43.273300    4635 main.go:141] libmachine: STDOUT: 
	I0911 04:38:43.273315    4635 main.go:141] libmachine: STDERR: 
	I0911 04:38:43.273396    4635 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/bridge-838000/disk.qcow2 +20000M
	I0911 04:38:43.280562    4635 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:38:43.280577    4635 main.go:141] libmachine: STDERR: 
	I0911 04:38:43.280611    4635 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/bridge-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/bridge-838000/disk.qcow2
	I0911 04:38:43.280618    4635 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:38:43.280653    4635 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/bridge-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/bridge-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/bridge-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:43:84:b5:6e:66 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/bridge-838000/disk.qcow2
	I0911 04:38:43.282071    4635 main.go:141] libmachine: STDOUT: 
	I0911 04:38:43.282083    4635 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:38:43.282103    4635 client.go:171] LocalClient.Create took 259.723875ms
	I0911 04:38:45.284295    4635 start.go:128] duration metric: createHost completed in 2.281409083s
	I0911 04:38:45.284383    4635 start.go:83] releasing machines lock for "bridge-838000", held for 2.281558875s
	W0911 04:38:45.284493    4635 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:38:45.292970    4635 out.go:177] * Deleting "bridge-838000" in qemu2 ...
	W0911 04:38:45.313659    4635 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:38:45.313689    4635 start.go:687] Will try again in 5 seconds ...
	I0911 04:38:50.315920    4635 start.go:365] acquiring machines lock for bridge-838000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:38:50.316427    4635 start.go:369] acquired machines lock for "bridge-838000" in 387.333µs
	I0911 04:38:50.316557    4635 start.go:93] Provisioning new machine with config: &{Name:bridge-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:bridge-838000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:38:50.316901    4635 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:38:50.323725    4635 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0911 04:38:50.368556    4635 start.go:159] libmachine.API.Create for "bridge-838000" (driver="qemu2")
	I0911 04:38:50.368591    4635 client.go:168] LocalClient.Create starting
	I0911 04:38:50.368695    4635 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:38:50.368763    4635 main.go:141] libmachine: Decoding PEM data...
	I0911 04:38:50.368778    4635 main.go:141] libmachine: Parsing certificate...
	I0911 04:38:50.368901    4635 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:38:50.368936    4635 main.go:141] libmachine: Decoding PEM data...
	I0911 04:38:50.368953    4635 main.go:141] libmachine: Parsing certificate...
	I0911 04:38:50.369492    4635 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:38:50.497915    4635 main.go:141] libmachine: Creating SSH key...
	I0911 04:38:50.541904    4635 main.go:141] libmachine: Creating Disk image...
	I0911 04:38:50.541913    4635 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:38:50.542049    4635 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/bridge-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/bridge-838000/disk.qcow2
	I0911 04:38:50.550572    4635 main.go:141] libmachine: STDOUT: 
	I0911 04:38:50.550586    4635 main.go:141] libmachine: STDERR: 
	I0911 04:38:50.550641    4635 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/bridge-838000/disk.qcow2 +20000M
	I0911 04:38:50.557734    4635 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:38:50.557748    4635 main.go:141] libmachine: STDERR: 
	I0911 04:38:50.557764    4635 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/bridge-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/bridge-838000/disk.qcow2
	I0911 04:38:50.557771    4635 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:38:50.557812    4635 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/bridge-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/bridge-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/bridge-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:e3:a6:b3:79:22 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/bridge-838000/disk.qcow2
	I0911 04:38:50.559298    4635 main.go:141] libmachine: STDOUT: 
	I0911 04:38:50.559310    4635 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:38:50.559322    4635 client.go:171] LocalClient.Create took 190.727292ms
	I0911 04:38:52.561457    4635 start.go:128] duration metric: createHost completed in 2.24450525s
	I0911 04:38:52.561549    4635 start.go:83] releasing machines lock for "bridge-838000", held for 2.245063959s
	W0911 04:38:52.561872    4635 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:38:52.572449    4635 out.go:177] 
	W0911 04:38:52.576549    4635 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:38:52.576564    4635 out.go:239] * 
	* 
	W0911 04:38:52.578004    4635 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:38:52.588419    4635 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.71s)

TestStoppedBinaryUpgrade/Upgrade (3.53s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2303124502.exe start -p stopped-upgrade-494000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2303124502.exe start -p stopped-upgrade-494000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2303124502.exe: permission denied (7.787459ms)
version_upgrade_test.go:196: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2303124502.exe start -p stopped-upgrade-494000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2303124502.exe start -p stopped-upgrade-494000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2303124502.exe: permission denied (7.297834ms)
version_upgrade_test.go:196: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2303124502.exe start -p stopped-upgrade-494000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2303124502.exe start -p stopped-upgrade-494000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2303124502.exe: permission denied (1.452125ms)
version_upgrade_test.go:202: legacy v1.6.2 start failed: fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2303124502.exe: permission denied
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (3.53s)
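
Unlike the socket_vmnet failures, this run never reaches the qemu2 driver: fork/exec ...: permission denied on the downloaded v1.6.2 binary is the usual symptom of a file written to the temp directory without its executable bit set. A minimal sketch of that idea, with a hypothetical path argument standing in for the downloaded binary (illustrative only, not the test's actual download helper):

	// exec_downloaded.go - sketch: a file written without the executable bit
	// cannot be exec'd; omitting the os.Chmod step reproduces the exact
	// "fork/exec <path>: permission denied" error shown above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		if len(os.Args) < 2 {
			fmt.Fprintln(os.Stderr, "usage: exec_downloaded <binary>")
			os.Exit(2)
		}
		path := os.Args[1] // e.g. the downloaded legacy minikube binary
		if err := os.Chmod(path, 0o755); err != nil { // grant rwxr-xr-x
			panic(err)
		}
		out, err := exec.Command(path, "version").CombinedOutput()
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out))
	}

The immediate, repeated failure (three attempts in under 20ms each) is consistent with a host-side file-permission problem rather than anything inside the VM or the legacy binary itself.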

TestStoppedBinaryUpgrade/MinikubeLogs (0.08s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-494000
version_upgrade_test.go:219: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p stopped-upgrade-494000: exit status 85 (73.999708ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-838000                         | enable-default-cni-838000 | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | sudo systemctl status docker                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838000                         | enable-default-cni-838000 | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | sudo systemctl cat docker                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838000                         | enable-default-cni-838000 | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838000                         | enable-default-cni-838000 | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | sudo docker system info                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838000                         | enable-default-cni-838000 | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | cri-docker --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838000                         | enable-default-cni-838000 | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838000 sudo cat                | enable-default-cni-838000 | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838000 sudo cat                | enable-default-cni-838000 | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838000                         | enable-default-cni-838000 | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | sudo cri-dockerd --version                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838000                         | enable-default-cni-838000 | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | containerd --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838000                         | enable-default-cni-838000 | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838000 sudo cat                | enable-default-cni-838000 | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838000                         | enable-default-cni-838000 | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838000                         | enable-default-cni-838000 | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | sudo containerd config dump                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838000                         | enable-default-cni-838000 | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | sudo systemctl status crio                           |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838000                         | enable-default-cni-838000 | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | sudo systemctl cat crio                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838000                         | enable-default-cni-838000 | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | sudo find /etc/crio -type f                          |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                        |                           |         |         |                     |                     |
	|         | \;                                                   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-838000                         | enable-default-cni-838000 | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | sudo crio config                                     |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-838000                         | enable-default-cni-838000 | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT | 11 Sep 23 04:38 PDT |
	| start   | -p flannel-838000                                    | flannel-838000            | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --cni=flannel --driver=qemu2                         |                           |         |         |                     |                     |
	| ssh     | -p flannel-838000 sudo cat                           | flannel-838000            | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | /etc/nsswitch.conf                                   |                           |         |         |                     |                     |
	| ssh     | -p flannel-838000 sudo cat                           | flannel-838000            | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | /etc/hosts                                           |                           |         |         |                     |                     |
	| ssh     | -p flannel-838000 sudo cat                           | flannel-838000            | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | /etc/resolv.conf                                     |                           |         |         |                     |                     |
	| ssh     | -p flannel-838000 sudo crictl                        | flannel-838000            | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | pods                                                 |                           |         |         |                     |                     |
	| ssh     | -p flannel-838000 sudo crictl                        | flannel-838000            | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | ps --all                                             |                           |         |         |                     |                     |
	| ssh     | -p flannel-838000 sudo find                          | flannel-838000            | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | /etc/cni -type f -exec sh -c                         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p flannel-838000 sudo ip a s                        | flannel-838000            | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	| ssh     | -p flannel-838000 sudo ip r s                        | flannel-838000            | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	| ssh     | -p flannel-838000 sudo                               | flannel-838000            | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | iptables-save                                        |                           |         |         |                     |                     |
	| ssh     | -p flannel-838000 sudo                               | flannel-838000            | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | iptables -t nat -L -n -v                             |                           |         |         |                     |                     |
	| ssh     | -p flannel-838000 sudo cat                           | flannel-838000            | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | /run/flannel/subnet.env                              |                           |         |         |                     |                     |
	| ssh     | -p flannel-838000 sudo cat                           | flannel-838000            | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | /etc/kube-flannel/cni-conf.json                      |                           |         |         |                     |                     |
	| ssh     | -p flannel-838000 sudo                               | flannel-838000            | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | systemctl status kubelet --all                       |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p flannel-838000 sudo                               | flannel-838000            | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | systemctl cat kubelet                                |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p flannel-838000 sudo                               | flannel-838000            | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p flannel-838000 sudo cat                           | flannel-838000            | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p flannel-838000 sudo cat                           | flannel-838000            | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p flannel-838000 sudo                               | flannel-838000            | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p flannel-838000 sudo                               | flannel-838000            | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p flannel-838000 sudo cat                           | flannel-838000            | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p flannel-838000 sudo docker                        | flannel-838000            | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p flannel-838000 sudo                               | flannel-838000            | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p flannel-838000 sudo                               | flannel-838000            | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p flannel-838000 sudo cat                           | flannel-838000            | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p flannel-838000 sudo cat                           | flannel-838000            | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p flannel-838000 sudo                               | flannel-838000            | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p flannel-838000 sudo                               | flannel-838000            | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p flannel-838000 sudo                               | flannel-838000            | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p flannel-838000 sudo cat                           | flannel-838000            | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p flannel-838000 sudo cat                           | flannel-838000            | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p flannel-838000 sudo                               | flannel-838000            | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p flannel-838000 sudo                               | flannel-838000            | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p flannel-838000 sudo                               | flannel-838000            | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p flannel-838000 sudo find                          | flannel-838000            | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p flannel-838000 sudo crio                          | flannel-838000            | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p flannel-838000                                    | flannel-838000            | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT | 11 Sep 23 04:38 PDT |
	| start   | -p bridge-838000 --memory=3072                       | bridge-838000             | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --cni=bridge --driver=qemu2                          |                           |         |         |                     |                     |
	| ssh     | -p bridge-838000 sudo cat                            | bridge-838000             | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | /etc/nsswitch.conf                                   |                           |         |         |                     |                     |
	| ssh     | -p bridge-838000 sudo cat                            | bridge-838000             | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | /etc/hosts                                           |                           |         |         |                     |                     |
	| ssh     | -p bridge-838000 sudo cat                            | bridge-838000             | jenkins | v1.31.2 | 11 Sep 23 04:38 PDT |                     |
	|         | /etc/resolv.conf                                     |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/11 04:38:42
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0911 04:38:42.923261    4635 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:38:42.923367    4635 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:38:42.923370    4635 out.go:309] Setting ErrFile to fd 2...
	I0911 04:38:42.923373    4635 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:38:42.923489    4635 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:38:42.924494    4635 out.go:303] Setting JSON to false
	I0911 04:38:42.939660    4635 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4096,"bootTime":1694428226,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0911 04:38:42.939721    4635 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:38:42.942658    4635 out.go:177] * [bridge-838000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:38:42.949421    4635 out.go:177]   - MINIKUBE_LOCATION=17225
	I0911 04:38:42.952369    4635 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 04:38:42.949465    4635 notify.go:220] Checking for updates...
	I0911 04:38:42.958377    4635 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:38:42.959705    4635 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:38:42.962395    4635 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	I0911 04:38:42.965364    4635 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:38:42.968584    4635 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:38:42.972309    4635 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:38:42.979342    4635 start.go:298] selected driver: qemu2
	I0911 04:38:42.979346    4635 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:38:42.979352    4635 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:38:42.981455    4635 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:38:42.984398    4635 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:38:42.987480    4635 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:38:42.987505    4635 cni.go:84] Creating CNI manager for "bridge"
	I0911 04:38:42.987509    4635 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0911 04:38:42.987514    4635 start_flags.go:321] config:
	{Name:bridge-838000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:bridge-838000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:38:42.991834    4635 iso.go:125] acquiring lock: {Name:mk940869f3ea2950f9d7f9f25946c8b774cc6054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:38:42.998350    4635 out.go:177] * Starting control plane node bridge-838000 in cluster bridge-838000
	I0911 04:38:43.002309    4635 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:38:43.002327    4635 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:38:43.002342    4635 cache.go:57] Caching tarball of preloaded images
	I0911 04:38:43.002398    4635 preload.go:174] Found /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:38:43.002403    4635 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:38:43.002598    4635 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/bridge-838000/config.json ...
	I0911 04:38:43.002611    4635 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/bridge-838000/config.json: {Name:mk5c8da8d870fee774ac69e0ac0d9edf182f1ba0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:38:43.002787    4635 start.go:365] acquiring machines lock for bridge-838000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:38:43.002817    4635 start.go:369] acquired machines lock for "bridge-838000" in 24.333µs
	I0911 04:38:43.002827    4635 start.go:93] Provisioning new machine with config: &{Name:bridge-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:bridge-838000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:38:43.002858    4635 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:38:43.007396    4635 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0911 04:38:43.022339    4635 start.go:159] libmachine.API.Create for "bridge-838000" (driver="qemu2")
	I0911 04:38:43.022372    4635 client.go:168] LocalClient.Create starting
	I0911 04:38:43.022431    4635 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:38:43.022455    4635 main.go:141] libmachine: Decoding PEM data...
	I0911 04:38:43.022468    4635 main.go:141] libmachine: Parsing certificate...
	I0911 04:38:43.022508    4635 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:38:43.022526    4635 main.go:141] libmachine: Decoding PEM data...
	I0911 04:38:43.022532    4635 main.go:141] libmachine: Parsing certificate...
	I0911 04:38:43.022842    4635 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:38:43.142978    4635 main.go:141] libmachine: Creating SSH key...
	I0911 04:38:43.264847    4635 main.go:141] libmachine: Creating Disk image...
	I0911 04:38:43.264853    4635 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:38:43.265007    4635 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/bridge-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/bridge-838000/disk.qcow2
	I0911 04:38:43.273300    4635 main.go:141] libmachine: STDOUT: 
	I0911 04:38:43.273315    4635 main.go:141] libmachine: STDERR: 
	I0911 04:38:43.273396    4635 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/bridge-838000/disk.qcow2 +20000M
	I0911 04:38:43.280562    4635 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:38:43.280577    4635 main.go:141] libmachine: STDERR: 
	I0911 04:38:43.280611    4635 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/bridge-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/bridge-838000/disk.qcow2
	I0911 04:38:43.280618    4635 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:38:43.280653    4635 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/bridge-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/bridge-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/bridge-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:43:84:b5:6e:66 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/bridge-838000/disk.qcow2
	I0911 04:38:43.282071    4635 main.go:141] libmachine: STDOUT: 
	I0911 04:38:43.282083    4635 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:38:43.282103    4635 client.go:171] LocalClient.Create took 259.723875ms
	I0911 04:38:45.284295    4635 start.go:128] duration metric: createHost completed in 2.281409083s
	I0911 04:38:45.284383    4635 start.go:83] releasing machines lock for "bridge-838000", held for 2.281558875s
	W0911 04:38:45.284493    4635 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:38:45.292970    4635 out.go:177] * Deleting "bridge-838000" in qemu2 ...
	W0911 04:38:45.313659    4635 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:38:45.313689    4635 start.go:687] Will try again in 5 seconds ...
	I0911 04:38:50.315920    4635 start.go:365] acquiring machines lock for bridge-838000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:38:50.316427    4635 start.go:369] acquired machines lock for "bridge-838000" in 387.333µs
	I0911 04:38:50.316557    4635 start.go:93] Provisioning new machine with config: &{Name:bridge-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:bridge-838000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:38:50.316901    4635 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:38:50.323725    4635 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0911 04:38:50.368556    4635 start.go:159] libmachine.API.Create for "bridge-838000" (driver="qemu2")
	I0911 04:38:50.368591    4635 client.go:168] LocalClient.Create starting
	I0911 04:38:50.368695    4635 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:38:50.368763    4635 main.go:141] libmachine: Decoding PEM data...
	I0911 04:38:50.368778    4635 main.go:141] libmachine: Parsing certificate...
	I0911 04:38:50.368901    4635 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:38:50.368936    4635 main.go:141] libmachine: Decoding PEM data...
	I0911 04:38:50.368953    4635 main.go:141] libmachine: Parsing certificate...
	I0911 04:38:50.369492    4635 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:38:50.497915    4635 main.go:141] libmachine: Creating SSH key...
	I0911 04:38:50.541904    4635 main.go:141] libmachine: Creating Disk image...
	I0911 04:38:50.541913    4635 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:38:50.542049    4635 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/bridge-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/bridge-838000/disk.qcow2
	I0911 04:38:50.550572    4635 main.go:141] libmachine: STDOUT: 
	I0911 04:38:50.550586    4635 main.go:141] libmachine: STDERR: 
	I0911 04:38:50.550641    4635 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/bridge-838000/disk.qcow2 +20000M
	I0911 04:38:50.557734    4635 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:38:50.557748    4635 main.go:141] libmachine: STDERR: 
	I0911 04:38:50.557764    4635 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/bridge-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/bridge-838000/disk.qcow2
	I0911 04:38:50.557771    4635 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:38:50.557812    4635 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/bridge-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/bridge-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/bridge-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:e3:a6:b3:79:22 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/bridge-838000/disk.qcow2
	I0911 04:38:50.559298    4635 main.go:141] libmachine: STDOUT: 
	I0911 04:38:50.559310    4635 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:38:50.559322    4635 client.go:171] LocalClient.Create took 190.727292ms
	I0911 04:38:52.561457    4635 start.go:128] duration metric: createHost completed in 2.24450525s
	I0911 04:38:52.561549    4635 start.go:83] releasing machines lock for "bridge-838000", held for 2.245063959s
	W0911 04:38:52.561872    4635 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:38:52.572449    4635 out.go:177] 
	W0911 04:38:52.576549    4635 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:38:52.576564    4635 out.go:239] * 
	W0911 04:38:52.578004    4635 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:38:52.588419    4635 out.go:177] 
	
	* 
	* Profile "stopped-upgrade-494000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p stopped-upgrade-494000"

                                                
                                                
-- /stdout --
version_upgrade_test.go:221: `minikube logs` after upgrade to HEAD from v1.6.2 failed: exit status 85
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (0.08s)
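Every qemu2 start in this run fails at the same step: the socket_vmnet client cannot reach its daemon socket, so QEMU never receives a network fd and the VM is torn down. A minimal sketch for checking the daemon on the affected agent, assuming a Homebrew-managed socket_vmnet install (the socket and client paths are copied from the logs above; the launchd/Homebrew service name is an assumption and may differ per install):

	# Does the socket exist, and is the daemon loaded?
	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i vmnet
	# If socket_vmnet was installed via Homebrew, restarting the service
	# should recreate the socket (assumed service name):
	sudo brew services restart socket_vmnet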

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (10.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (10.022557875s)

                                                
                                                
-- stdout --
	* [kubenet-838000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubenet-838000 in cluster kubenet-838000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-838000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0911 04:38:53.380212    4703 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:38:53.380317    4703 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:38:53.380320    4703 out.go:309] Setting ErrFile to fd 2...
	I0911 04:38:53.380322    4703 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:38:53.380427    4703 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:38:53.384112    4703 out.go:303] Setting JSON to false
	I0911 04:38:53.399924    4703 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4107,"bootTime":1694428226,"procs":411,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0911 04:38:53.399996    4703 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:38:53.404874    4703 out.go:177] * [kubenet-838000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:38:53.412949    4703 out.go:177]   - MINIKUBE_LOCATION=17225
	I0911 04:38:53.415881    4703 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 04:38:53.412973    4703 notify.go:220] Checking for updates...
	I0911 04:38:53.418950    4703 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:38:53.426930    4703 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:38:53.437851    4703 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	I0911 04:38:53.444903    4703 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:38:53.451073    4703 config.go:182] Loaded profile config "bridge-838000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:38:53.451126    4703 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:38:53.454828    4703 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:38:53.460805    4703 start.go:298] selected driver: qemu2
	I0911 04:38:53.460811    4703 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:38:53.460821    4703 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:38:53.463342    4703 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:38:53.465828    4703 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:38:53.468954    4703 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:38:53.468975    4703 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0911 04:38:53.468979    4703 start_flags.go:321] config:
	{Name:kubenet-838000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kubenet-838000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:38:53.473316    4703 iso.go:125] acquiring lock: {Name:mk940869f3ea2950f9d7f9f25946c8b774cc6054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:38:53.479872    4703 out.go:177] * Starting control plane node kubenet-838000 in cluster kubenet-838000
	I0911 04:38:53.487697    4703 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:38:53.487712    4703 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:38:53.487729    4703 cache.go:57] Caching tarball of preloaded images
	I0911 04:38:53.487788    4703 preload.go:174] Found /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:38:53.487794    4703 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:38:53.487856    4703 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/kubenet-838000/config.json ...
	I0911 04:38:53.487867    4703 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/kubenet-838000/config.json: {Name:mkef89bb14b5f9500264c25cc82462af073577b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:38:53.488198    4703 start.go:365] acquiring machines lock for kubenet-838000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:38:53.488224    4703 start.go:369] acquired machines lock for "kubenet-838000" in 20.791µs
	I0911 04:38:53.488233    4703 start.go:93] Provisioning new machine with config: &{Name:kubenet-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kubenet-838000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:38:53.488270    4703 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:38:53.498883    4703 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0911 04:38:53.512950    4703 start.go:159] libmachine.API.Create for "kubenet-838000" (driver="qemu2")
	I0911 04:38:53.512992    4703 client.go:168] LocalClient.Create starting
	I0911 04:38:53.513055    4703 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:38:53.513077    4703 main.go:141] libmachine: Decoding PEM data...
	I0911 04:38:53.513087    4703 main.go:141] libmachine: Parsing certificate...
	I0911 04:38:53.513129    4703 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:38:53.513145    4703 main.go:141] libmachine: Decoding PEM data...
	I0911 04:38:53.513156    4703 main.go:141] libmachine: Parsing certificate...
	I0911 04:38:53.513484    4703 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:38:53.723474    4703 main.go:141] libmachine: Creating SSH key...
	I0911 04:38:53.903001    4703 main.go:141] libmachine: Creating Disk image...
	I0911 04:38:53.903010    4703 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:38:53.903208    4703 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubenet-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubenet-838000/disk.qcow2
	I0911 04:38:53.919374    4703 main.go:141] libmachine: STDOUT: 
	I0911 04:38:53.919397    4703 main.go:141] libmachine: STDERR: 
	I0911 04:38:53.919468    4703 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubenet-838000/disk.qcow2 +20000M
	I0911 04:38:53.927397    4703 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:38:53.927415    4703 main.go:141] libmachine: STDERR: 
	I0911 04:38:53.927442    4703 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubenet-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubenet-838000/disk.qcow2
	I0911 04:38:53.927454    4703 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:38:53.927495    4703 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubenet-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubenet-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubenet-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:b6:a0:82:5a:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubenet-838000/disk.qcow2
	I0911 04:38:53.929482    4703 main.go:141] libmachine: STDOUT: 
	I0911 04:38:53.929502    4703 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:38:53.929522    4703 client.go:171] LocalClient.Create took 416.52475ms
	I0911 04:38:55.931724    4703 start.go:128] duration metric: createHost completed in 2.443426417s
	I0911 04:38:55.931818    4703 start.go:83] releasing machines lock for "kubenet-838000", held for 2.443587s
	W0911 04:38:55.931886    4703 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:38:55.945859    4703 out.go:177] * Deleting "kubenet-838000" in qemu2 ...
	W0911 04:38:55.961372    4703 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:38:55.961397    4703 start.go:687] Will try again in 5 seconds ...
	I0911 04:39:00.963572    4703 start.go:365] acquiring machines lock for kubenet-838000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:39:00.963979    4703 start.go:369] acquired machines lock for "kubenet-838000" in 303.792µs
	I0911 04:39:00.964136    4703 start.go:93] Provisioning new machine with config: &{Name:kubenet-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kubenet-838000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:39:00.964489    4703 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:39:00.971902    4703 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0911 04:39:01.015309    4703 start.go:159] libmachine.API.Create for "kubenet-838000" (driver="qemu2")
	I0911 04:39:01.015354    4703 client.go:168] LocalClient.Create starting
	I0911 04:39:01.015460    4703 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:39:01.015522    4703 main.go:141] libmachine: Decoding PEM data...
	I0911 04:39:01.015538    4703 main.go:141] libmachine: Parsing certificate...
	I0911 04:39:01.015658    4703 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:39:01.015692    4703 main.go:141] libmachine: Decoding PEM data...
	I0911 04:39:01.015712    4703 main.go:141] libmachine: Parsing certificate...
	I0911 04:39:01.016205    4703 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:39:01.154587    4703 main.go:141] libmachine: Creating SSH key...
	I0911 04:39:01.313469    4703 main.go:141] libmachine: Creating Disk image...
	I0911 04:39:01.313478    4703 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:39:01.313628    4703 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubenet-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubenet-838000/disk.qcow2
	I0911 04:39:01.322234    4703 main.go:141] libmachine: STDOUT: 
	I0911 04:39:01.322250    4703 main.go:141] libmachine: STDERR: 
	I0911 04:39:01.322313    4703 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubenet-838000/disk.qcow2 +20000M
	I0911 04:39:01.329564    4703 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:39:01.329576    4703 main.go:141] libmachine: STDERR: 
	I0911 04:39:01.329593    4703 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubenet-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubenet-838000/disk.qcow2
	I0911 04:39:01.329605    4703 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:39:01.329645    4703 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubenet-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubenet-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubenet-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:92:b7:f7:21:85 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/kubenet-838000/disk.qcow2
	I0911 04:39:01.331123    4703 main.go:141] libmachine: STDOUT: 
	I0911 04:39:01.331134    4703 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:39:01.331146    4703 client.go:171] LocalClient.Create took 315.784833ms
	I0911 04:39:03.333317    4703 start.go:128] duration metric: createHost completed in 2.368812417s
	I0911 04:39:03.333368    4703 start.go:83] releasing machines lock for "kubenet-838000", held for 2.369347875s
	W0911 04:39:03.333742    4703 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:39:03.347425    4703 out.go:177] 
	W0911 04:39:03.352511    4703 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:39:03.352547    4703 out.go:239] * 
	W0911 04:39:03.354460    4703 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:39:03.363311    4703 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (10.02s)
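The failure can be reproduced outside minikube by invoking the client binary directly against the socket, using the exact paths from the log above. A sketch (`true` is only a placeholder for the qemu-system-aarch64 command line that socket_vmnet_client would normally exec):

	# socket_vmnet_client takes the socket path followed by a command to run.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# With a healthy daemon this exits 0; with no daemon listening it fails
	# immediately with the same 'Failed to connect to "/var/run/socket_vmnet":
	# Connection refused' seen in every test in this run.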

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (10.92s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-011000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-011000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (10.883548042s)

                                                
                                                
-- stdout --
	* [old-k8s-version-011000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node old-k8s-version-011000 in cluster old-k8s-version-011000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-011000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0911 04:38:54.888385    4779 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:38:54.888493    4779 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:38:54.888496    4779 out.go:309] Setting ErrFile to fd 2...
	I0911 04:38:54.888498    4779 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:38:54.888615    4779 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:38:54.889655    4779 out.go:303] Setting JSON to false
	I0911 04:38:54.904636    4779 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4108,"bootTime":1694428226,"procs":411,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0911 04:38:54.904703    4779 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:38:54.910144    4779 out.go:177] * [old-k8s-version-011000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:38:54.917175    4779 out.go:177]   - MINIKUBE_LOCATION=17225
	I0911 04:38:54.920970    4779 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 04:38:54.917241    4779 notify.go:220] Checking for updates...
	I0911 04:38:54.927111    4779 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:38:54.930094    4779 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:38:54.933535    4779 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	I0911 04:38:54.938112    4779 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:38:54.941328    4779 config.go:182] Loaded profile config "kubenet-838000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:38:54.941377    4779 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:38:54.945105    4779 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:38:54.951127    4779 start.go:298] selected driver: qemu2
	I0911 04:38:54.951135    4779 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:38:54.951153    4779 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:38:54.953193    4779 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:38:54.956075    4779 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:38:54.959187    4779 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:38:54.959206    4779 cni.go:84] Creating CNI manager for ""
	I0911 04:38:54.959211    4779 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0911 04:38:54.959215    4779 start_flags.go:321] config:
	{Name:old-k8s-version-011000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-011000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:38:54.963363    4779 iso.go:125] acquiring lock: {Name:mk940869f3ea2950f9d7f9f25946c8b774cc6054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:38:54.970077    4779 out.go:177] * Starting control plane node old-k8s-version-011000 in cluster old-k8s-version-011000
	I0911 04:38:54.974154    4779 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0911 04:38:54.974173    4779 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0911 04:38:54.974189    4779 cache.go:57] Caching tarball of preloaded images
	I0911 04:38:54.974243    4779 preload.go:174] Found /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:38:54.974250    4779 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0911 04:38:54.974320    4779 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/old-k8s-version-011000/config.json ...
	I0911 04:38:54.974332    4779 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/old-k8s-version-011000/config.json: {Name:mka8c0f17bf54ba78a0f66a48e93d2791b4a13e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:38:54.974531    4779 start.go:365] acquiring machines lock for old-k8s-version-011000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:38:55.931973    4779 start.go:369] acquired machines lock for "old-k8s-version-011000" in 957.389417ms
	I0911 04:38:55.932138    4779 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-011000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-011000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:38:55.932419    4779 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:38:55.938930    4779 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0911 04:38:55.984866    4779 start.go:159] libmachine.API.Create for "old-k8s-version-011000" (driver="qemu2")
	I0911 04:38:55.984917    4779 client.go:168] LocalClient.Create starting
	I0911 04:38:55.985041    4779 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:38:55.985096    4779 main.go:141] libmachine: Decoding PEM data...
	I0911 04:38:55.985115    4779 main.go:141] libmachine: Parsing certificate...
	I0911 04:38:55.985191    4779 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:38:55.985227    4779 main.go:141] libmachine: Decoding PEM data...
	I0911 04:38:55.985242    4779 main.go:141] libmachine: Parsing certificate...
	I0911 04:38:55.985894    4779 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:38:56.112357    4779 main.go:141] libmachine: Creating SSH key...
	I0911 04:38:56.159136    4779 main.go:141] libmachine: Creating Disk image...
	I0911 04:38:56.159143    4779 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:38:56.159289    4779 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/old-k8s-version-011000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/old-k8s-version-011000/disk.qcow2
	I0911 04:38:56.167776    4779 main.go:141] libmachine: STDOUT: 
	I0911 04:38:56.167787    4779 main.go:141] libmachine: STDERR: 
	I0911 04:38:56.167832    4779 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/old-k8s-version-011000/disk.qcow2 +20000M
	I0911 04:38:56.174923    4779 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:38:56.174934    4779 main.go:141] libmachine: STDERR: 
	I0911 04:38:56.174945    4779 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/old-k8s-version-011000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/old-k8s-version-011000/disk.qcow2
	I0911 04:38:56.174950    4779 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:38:56.175002    4779 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/old-k8s-version-011000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/old-k8s-version-011000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/old-k8s-version-011000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:d3:9d:b9:97:75 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/old-k8s-version-011000/disk.qcow2
	I0911 04:38:56.176505    4779 main.go:141] libmachine: STDOUT: 
	I0911 04:38:56.176514    4779 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:38:56.176532    4779 client.go:171] LocalClient.Create took 191.608208ms
	I0911 04:38:58.178694    4779 start.go:128] duration metric: createHost completed in 2.246250667s
	I0911 04:38:58.178758    4779 start.go:83] releasing machines lock for "old-k8s-version-011000", held for 2.246750958s
	W0911 04:38:58.178848    4779 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:38:58.187368    4779 out.go:177] * Deleting "old-k8s-version-011000" in qemu2 ...
	W0911 04:38:58.209739    4779 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:38:58.209768    4779 start.go:687] Will try again in 5 seconds ...
	I0911 04:39:03.212012    4779 start.go:365] acquiring machines lock for old-k8s-version-011000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:39:03.333505    4779 start.go:369] acquired machines lock for "old-k8s-version-011000" in 121.316792ms
	I0911 04:39:03.333653    4779 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-011000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-011000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:39:03.333895    4779 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:39:03.343477    4779 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0911 04:39:03.389768    4779 start.go:159] libmachine.API.Create for "old-k8s-version-011000" (driver="qemu2")
	I0911 04:39:03.389811    4779 client.go:168] LocalClient.Create starting
	I0911 04:39:03.389966    4779 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:39:03.390018    4779 main.go:141] libmachine: Decoding PEM data...
	I0911 04:39:03.390040    4779 main.go:141] libmachine: Parsing certificate...
	I0911 04:39:03.390107    4779 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:39:03.390139    4779 main.go:141] libmachine: Decoding PEM data...
	I0911 04:39:03.390156    4779 main.go:141] libmachine: Parsing certificate...
	I0911 04:39:03.390696    4779 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:39:03.522973    4779 main.go:141] libmachine: Creating SSH key...
	I0911 04:39:03.687562    4779 main.go:141] libmachine: Creating Disk image...
	I0911 04:39:03.687572    4779 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:39:03.687751    4779 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/old-k8s-version-011000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/old-k8s-version-011000/disk.qcow2
	I0911 04:39:03.696659    4779 main.go:141] libmachine: STDOUT: 
	I0911 04:39:03.696677    4779 main.go:141] libmachine: STDERR: 
	I0911 04:39:03.696742    4779 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/old-k8s-version-011000/disk.qcow2 +20000M
	I0911 04:39:03.704704    4779 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:39:03.704728    4779 main.go:141] libmachine: STDERR: 
	I0911 04:39:03.704749    4779 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/old-k8s-version-011000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/old-k8s-version-011000/disk.qcow2
	I0911 04:39:03.704755    4779 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:39:03.704792    4779 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/old-k8s-version-011000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/old-k8s-version-011000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/old-k8s-version-011000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:da:4a:81:f7:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/old-k8s-version-011000/disk.qcow2
	I0911 04:39:03.706379    4779 main.go:141] libmachine: STDOUT: 
	I0911 04:39:03.706397    4779 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:39:03.706412    4779 client.go:171] LocalClient.Create took 316.585541ms
	I0911 04:39:05.708462    4779 start.go:128] duration metric: createHost completed in 2.374551958s
	I0911 04:39:05.708483    4779 start.go:83] releasing machines lock for "old-k8s-version-011000", held for 2.37496s
	W0911 04:39:05.708598    4779 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-011000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-011000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:39:05.716889    4779 out.go:177] 
	W0911 04:39:05.724841    4779 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:39:05.724854    4779 out.go:239] * 
	* 
	W0911 04:39:05.725341    4779 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:39:05.736805    4779 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-011000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000: exit status 7 (34.796709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-011000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.92s)
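The stderr above also shows minikube's recovery flow for this error: the first createHost attempt fails, the profile is deleted, it waits a fixed five seconds ("Will try again in 5 seconds ..."), retries once, and only then exits with GUEST_PROVISION. A hedged Go sketch of that single-retry-with-fixed-backoff shape follows; createHost here is an illustrative stand-in that always fails the way the log does, not minikube's real API.

package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost stands in for the libmachine host creation seen in the log;
// the name is illustrative. Here it always fails with the error observed above.
func createHost() error {
	return errors.New(`creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1`)
}

func main() {
	if err := createHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // fixed backoff, matching "Will try again in 5 seconds ..."
		if err := createHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host:", err)
		}
	}
}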

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (10.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-581000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-581000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (10.19374125s)

                                                
                                                
-- stdout --
	* [no-preload-581000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node no-preload-581000 in cluster no-preload-581000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-581000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0911 04:39:05.538010    4893 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:39:05.538129    4893 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:39:05.538132    4893 out.go:309] Setting ErrFile to fd 2...
	I0911 04:39:05.538134    4893 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:39:05.538247    4893 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:39:05.539260    4893 out.go:303] Setting JSON to false
	I0911 04:39:05.554381    4893 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4119,"bootTime":1694428226,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0911 04:39:05.554441    4893 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:39:05.557371    4893 out.go:177] * [no-preload-581000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:39:05.564851    4893 out.go:177]   - MINIKUBE_LOCATION=17225
	I0911 04:39:05.568765    4893 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 04:39:05.564856    4893 notify.go:220] Checking for updates...
	I0911 04:39:05.571861    4893 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:39:05.574783    4893 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:39:05.577655    4893 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	I0911 04:39:05.580766    4893 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:39:05.584179    4893 config.go:182] Loaded profile config "old-k8s-version-011000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0911 04:39:05.584230    4893 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:39:05.587680    4893 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:39:05.594764    4893 start.go:298] selected driver: qemu2
	I0911 04:39:05.594770    4893 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:39:05.594777    4893 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:39:05.596784    4893 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:39:05.598331    4893 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:39:05.601830    4893 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:39:05.601853    4893 cni.go:84] Creating CNI manager for ""
	I0911 04:39:05.601860    4893 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 04:39:05.601863    4893 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0911 04:39:05.601870    4893 start_flags.go:321] config:
	{Name:no-preload-581000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-581000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:39:05.605753    4893 iso.go:125] acquiring lock: {Name:mk940869f3ea2950f9d7f9f25946c8b774cc6054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:39:05.612791    4893 out.go:177] * Starting control plane node no-preload-581000 in cluster no-preload-581000
	I0911 04:39:05.616804    4893 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:39:05.616885    4893 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/no-preload-581000/config.json ...
	I0911 04:39:05.616914    4893 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/no-preload-581000/config.json: {Name:mk606fcb4833a65318bf301a04a3b9aab03e5464 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:39:05.616926    4893 cache.go:107] acquiring lock: {Name:mka16b08b08162019ebcf8baf85ee0a972ec736d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:39:05.616931    4893 cache.go:107] acquiring lock: {Name:mkd9647b19f39b4355857af9d0132f5adc68bf0e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:39:05.616935    4893 cache.go:107] acquiring lock: {Name:mk8bdfcc11af336f1c1f2c840abc75a4bd8805a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:39:05.616983    4893 cache.go:115] /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0911 04:39:05.616990    4893 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 65.541µs
	I0911 04:39:05.617001    4893 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0911 04:39:05.617007    4893 cache.go:107] acquiring lock: {Name:mkf6b28353814d47f64a45c5787e09c5b20e3de3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:39:05.617066    4893 cache.go:107] acquiring lock: {Name:mkd50a7370aba572340aea0670bf0a054a4f42a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:39:05.617077    4893 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.1
	I0911 04:39:05.617103    4893 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0911 04:39:05.617113    4893 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.1
	I0911 04:39:05.617149    4893 start.go:365] acquiring machines lock for no-preload-581000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:39:05.617177    4893 cache.go:107] acquiring lock: {Name:mka3ed3a15858dc8376829b696fa6533d0f8db2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:39:05.617213    4893 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0911 04:39:05.617227    4893 cache.go:107] acquiring lock: {Name:mkb5d0603bae1665fee1df78c2dee6ddeb85a542 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:39:05.617277    4893 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.1
	I0911 04:39:05.617266    4893 cache.go:107] acquiring lock: {Name:mkff4182de539079bf183cb111e542836d9c0a3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:39:05.617324    4893 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0911 04:39:05.617366    4893 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0911 04:39:05.623488    4893 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0911 04:39:05.623544    4893 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0911 04:39:05.623618    4893 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.1
	I0911 04:39:05.624186    4893 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.1
	I0911 04:39:05.624220    4893 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0911 04:39:05.624302    4893 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.1
	I0911 04:39:05.624310    4893 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0911 04:39:05.708624    4893 start.go:369] acquired machines lock for "no-preload-581000" in 91.462917ms
	I0911 04:39:05.708664    4893 start.go:93] Provisioning new machine with config: &{Name:no-preload-581000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-581000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:39:05.708780    4893 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:39:05.720775    4893 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0911 04:39:05.734986    4893 start.go:159] libmachine.API.Create for "no-preload-581000" (driver="qemu2")
	I0911 04:39:05.735014    4893 client.go:168] LocalClient.Create starting
	I0911 04:39:05.735083    4893 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:39:05.735106    4893 main.go:141] libmachine: Decoding PEM data...
	I0911 04:39:05.735118    4893 main.go:141] libmachine: Parsing certificate...
	I0911 04:39:05.735162    4893 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:39:05.735180    4893 main.go:141] libmachine: Decoding PEM data...
	I0911 04:39:05.735195    4893 main.go:141] libmachine: Parsing certificate...
	I0911 04:39:05.741220    4893 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:39:05.953232    4893 main.go:141] libmachine: Creating SSH key...
	I0911 04:39:06.078067    4893 main.go:141] libmachine: Creating Disk image...
	I0911 04:39:06.078082    4893 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:39:06.078345    4893 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/no-preload-581000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/no-preload-581000/disk.qcow2
	I0911 04:39:06.087841    4893 main.go:141] libmachine: STDOUT: 
	I0911 04:39:06.087877    4893 main.go:141] libmachine: STDERR: 
	I0911 04:39:06.087927    4893 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/no-preload-581000/disk.qcow2 +20000M
	I0911 04:39:06.096370    4893 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:39:06.096419    4893 main.go:141] libmachine: STDERR: 
	I0911 04:39:06.096438    4893 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/no-preload-581000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/no-preload-581000/disk.qcow2
	I0911 04:39:06.096446    4893 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:39:06.096509    4893 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/no-preload-581000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/no-preload-581000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/no-preload-581000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:f0:67:d6:f2:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/no-preload-581000/disk.qcow2
	I0911 04:39:06.098127    4893 main.go:141] libmachine: STDOUT: 
	I0911 04:39:06.098142    4893 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:39:06.098167    4893 client.go:171] LocalClient.Create took 363.143ms
	I0911 04:39:06.197434    4893 cache.go:162] opening:  /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1
	I0911 04:39:06.243289    4893 cache.go:162] opening:  /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0911 04:39:06.364399    4893 cache.go:157] /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0911 04:39:06.364415    4893 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 747.408875ms
	I0911 04:39:06.364421    4893 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0911 04:39:06.437481    4893 cache.go:162] opening:  /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1
	I0911 04:39:06.669537    4893 cache.go:162] opening:  /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1
	I0911 04:39:06.856065    4893 cache.go:162] opening:  /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0
	I0911 04:39:07.043252    4893 cache.go:162] opening:  /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1
	I0911 04:39:07.293718    4893 cache.go:162] opening:  /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1
	I0911 04:39:08.098369    4893 start.go:128] duration metric: createHost completed in 2.389555709s
	I0911 04:39:08.098421    4893 start.go:83] releasing machines lock for "no-preload-581000", held for 2.389779209s
	W0911 04:39:08.098484    4893 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:39:08.113090    4893 out.go:177] * Deleting "no-preload-581000" in qemu2 ...
	W0911 04:39:08.139446    4893 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:39:08.139482    4893 start.go:687] Will try again in 5 seconds ...
	I0911 04:39:08.269114    4893 cache.go:157] /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0911 04:39:08.269184    4893 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 2.65199325s
	I0911 04:39:08.269224    4893 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0911 04:39:10.219213    4893 cache.go:157] /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1 exists
	I0911 04:39:10.219283    4893 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.1" -> "/Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1" took 4.602351417s
	I0911 04:39:10.219330    4893 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.1 -> /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1 succeeded
	I0911 04:39:10.482474    4893 cache.go:157] /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1 exists
	I0911 04:39:10.482557    4893 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.1" -> "/Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1" took 4.865626042s
	I0911 04:39:10.482589    4893 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.1 -> /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1 succeeded
	I0911 04:39:10.674192    4893 cache.go:157] /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1 exists
	I0911 04:39:10.674244    4893 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.1" -> "/Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1" took 5.057080459s
	I0911 04:39:10.674279    4893 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.1 -> /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1 succeeded
	I0911 04:39:11.755993    4893 cache.go:157] /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1 exists
	I0911 04:39:11.756047    4893 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.1" -> "/Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1" took 6.138991041s
	I0911 04:39:11.756079    4893 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.1 -> /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1 succeeded
	I0911 04:39:13.147610    4893 start.go:365] acquiring machines lock for no-preload-581000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:39:13.156171    4893 start.go:369] acquired machines lock for "no-preload-581000" in 8.495083ms
	I0911 04:39:13.156221    4893 start.go:93] Provisioning new machine with config: &{Name:no-preload-581000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-581000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:39:13.156471    4893 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:39:13.163363    4893 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0911 04:39:13.206843    4893 start.go:159] libmachine.API.Create for "no-preload-581000" (driver="qemu2")
	I0911 04:39:13.206914    4893 client.go:168] LocalClient.Create starting
	I0911 04:39:13.207006    4893 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:39:13.207067    4893 main.go:141] libmachine: Decoding PEM data...
	I0911 04:39:13.207090    4893 main.go:141] libmachine: Parsing certificate...
	I0911 04:39:13.207162    4893 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:39:13.207195    4893 main.go:141] libmachine: Decoding PEM data...
	I0911 04:39:13.207209    4893 main.go:141] libmachine: Parsing certificate...
	I0911 04:39:13.207668    4893 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:39:13.342812    4893 main.go:141] libmachine: Creating SSH key...
	I0911 04:39:13.646933    4893 main.go:141] libmachine: Creating Disk image...
	I0911 04:39:13.646942    4893 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:39:13.647085    4893 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/no-preload-581000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/no-preload-581000/disk.qcow2
	I0911 04:39:13.655779    4893 main.go:141] libmachine: STDOUT: 
	I0911 04:39:13.655801    4893 main.go:141] libmachine: STDERR: 
	I0911 04:39:13.655870    4893 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/no-preload-581000/disk.qcow2 +20000M
	I0911 04:39:13.663921    4893 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:39:13.663938    4893 main.go:141] libmachine: STDERR: 
	I0911 04:39:13.663960    4893 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/no-preload-581000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/no-preload-581000/disk.qcow2
	I0911 04:39:13.663971    4893 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:39:13.664014    4893 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/no-preload-581000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/no-preload-581000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/no-preload-581000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:5b:3c:12:5e:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/no-preload-581000/disk.qcow2
	I0911 04:39:13.665721    4893 main.go:141] libmachine: STDOUT: 
	I0911 04:39:13.665736    4893 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:39:13.665753    4893 client.go:171] LocalClient.Create took 458.834333ms
	I0911 04:39:14.385662    4893 cache.go:157] /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 exists
	I0911 04:39:14.385744    4893 cache.go:96] cache image "registry.k8s.io/etcd:3.5.9-0" -> "/Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0" took 8.768533917s
	I0911 04:39:14.385769    4893 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.9-0 -> /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 succeeded
	I0911 04:39:14.385805    4893 cache.go:87] Successfully saved all images to host disk.
	I0911 04:39:15.667171    4893 start.go:128] duration metric: createHost completed in 2.510597042s
	I0911 04:39:15.667263    4893 start.go:83] releasing machines lock for "no-preload-581000", held for 2.511071083s
	W0911 04:39:15.667617    4893 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-581000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:39:15.681059    4893 out.go:177] 
	W0911 04:39:15.684024    4893 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:39:15.684067    4893 out.go:239] * 
	W0911 04:39:15.686787    4893 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:39:15.695023    4893 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-581000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-581000 -n no-preload-581000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-581000 -n no-preload-581000: exit status 7 (50.190209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-581000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.25s)
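
Every qemu2 start in this run dies the same way: the driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so no VM ever boots and each test fails before Kubernetes is involved. Below is a minimal Go sketch of a reachability probe for that socket; the probe is illustrative only (it is not part of the test suite), and the socket path is taken from the logs above.

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Socket path as it appears in the failing logs; socket_vmnet
		// listens here when its daemon is running on the host.
		const path = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			// A socket file with no listener yields the same
			// "connection refused" seen in the STDERR captured above.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", path, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Printf("socket_vmnet accepting connections at %s\n", path)
	}

If the probe fails, the fix is on the CI host (restart the socket_vmnet daemon), not in the tests.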

TestStartStop/group/old-k8s-version/serial/DeployApp (0.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-011000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-011000 create -f testdata/busybox.yaml: exit status 1 (28.926583ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-011000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000: exit status 7 (79.878916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-011000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000: exit status 7 (32.042458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-011000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.14s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-011000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-011000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-011000 describe deploy/metrics-server -n kube-system: exit status 1 (28.845666ms)

** stderr ** 
	error: context "old-k8s-version-011000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-011000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000: exit status 7 (29.406458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-011000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.17s)

TestStartStop/group/old-k8s-version/serial/SecondStart (6.99s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-011000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-011000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (6.941527583s)

-- stdout --
	* [old-k8s-version-011000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	* Using the qemu2 driver based on existing profile
	* Starting control plane node old-k8s-version-011000 in cluster old-k8s-version-011000
	* Restarting existing qemu2 VM for "old-k8s-version-011000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-011000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0911 04:39:06.277377    4963 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:39:06.277494    4963 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:39:06.277497    4963 out.go:309] Setting ErrFile to fd 2...
	I0911 04:39:06.277500    4963 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:39:06.277619    4963 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:39:06.278636    4963 out.go:303] Setting JSON to false
	I0911 04:39:06.294442    4963 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4120,"bootTime":1694428226,"procs":410,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0911 04:39:06.294506    4963 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:39:06.298882    4963 out.go:177] * [old-k8s-version-011000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:39:06.305820    4963 out.go:177]   - MINIKUBE_LOCATION=17225
	I0911 04:39:06.305928    4963 notify.go:220] Checking for updates...
	I0911 04:39:06.311761    4963 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 04:39:06.315873    4963 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:39:06.318806    4963 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:39:06.322781    4963 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	I0911 04:39:06.326808    4963 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:39:06.328223    4963 config.go:182] Loaded profile config "old-k8s-version-011000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0911 04:39:06.332761    4963 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0911 04:39:06.335848    4963 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:39:06.338704    4963 out.go:177] * Using the qemu2 driver based on existing profile
	I0911 04:39:06.345774    4963 start.go:298] selected driver: qemu2
	I0911 04:39:06.345781    4963 start.go:902] validating driver "qemu2" against &{Name:old-k8s-version-011000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-011000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:39:06.345844    4963 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:39:06.347804    4963 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:39:06.347834    4963 cni.go:84] Creating CNI manager for ""
	I0911 04:39:06.347840    4963 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0911 04:39:06.347845    4963 start_flags.go:321] config:
	{Name:old-k8s-version-011000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-011000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:39:06.351457    4963 iso.go:125] acquiring lock: {Name:mk940869f3ea2950f9d7f9f25946c8b774cc6054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:39:06.359842    4963 out.go:177] * Starting control plane node old-k8s-version-011000 in cluster old-k8s-version-011000
	I0911 04:39:06.363808    4963 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0911 04:39:06.363846    4963 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0911 04:39:06.363864    4963 cache.go:57] Caching tarball of preloaded images
	I0911 04:39:06.363947    4963 preload.go:174] Found /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:39:06.363953    4963 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0911 04:39:06.364024    4963 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/old-k8s-version-011000/config.json ...
	I0911 04:39:06.364272    4963 start.go:365] acquiring machines lock for old-k8s-version-011000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:39:08.098605    4963 start.go:369] acquired machines lock for "old-k8s-version-011000" in 1.734305375s
	I0911 04:39:08.098777    4963 start.go:96] Skipping create...Using existing machine configuration
	I0911 04:39:08.098822    4963 fix.go:54] fixHost starting: 
	I0911 04:39:08.099578    4963 fix.go:102] recreateIfNeeded on old-k8s-version-011000: state=Stopped err=<nil>
	W0911 04:39:08.099619    4963 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 04:39:08.109266    4963 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-011000" ...
	I0911 04:39:08.119573    4963 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/old-k8s-version-011000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/old-k8s-version-011000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/old-k8s-version-011000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:da:4a:81:f7:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/old-k8s-version-011000/disk.qcow2
	I0911 04:39:08.129581    4963 main.go:141] libmachine: STDOUT: 
	I0911 04:39:08.129684    4963 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:39:08.129842    4963 fix.go:56] fixHost completed within 31.011292ms
	I0911 04:39:08.129871    4963 start.go:83] releasing machines lock for "old-k8s-version-011000", held for 31.231292ms
	W0911 04:39:08.129916    4963 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:39:08.130131    4963 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:39:08.130170    4963 start.go:687] Will try again in 5 seconds ...
	I0911 04:39:13.132344    4963 start.go:365] acquiring machines lock for old-k8s-version-011000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:39:13.132788    4963 start.go:369] acquired machines lock for "old-k8s-version-011000" in 364.292µs
	I0911 04:39:13.132947    4963 start.go:96] Skipping create...Using existing machine configuration
	I0911 04:39:13.132966    4963 fix.go:54] fixHost starting: 
	I0911 04:39:13.133782    4963 fix.go:102] recreateIfNeeded on old-k8s-version-011000: state=Stopped err=<nil>
	W0911 04:39:13.133811    4963 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 04:39:13.138573    4963 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-011000" ...
	I0911 04:39:13.146497    4963 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/old-k8s-version-011000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/old-k8s-version-011000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/old-k8s-version-011000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:da:4a:81:f7:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/old-k8s-version-011000/disk.qcow2
	I0911 04:39:13.155920    4963 main.go:141] libmachine: STDOUT: 
	I0911 04:39:13.155983    4963 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:39:13.156070    4963 fix.go:56] fixHost completed within 23.105375ms
	I0911 04:39:13.156091    4963 start.go:83] releasing machines lock for "old-k8s-version-011000", held for 23.28225ms
	W0911 04:39:13.156375    4963 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-011000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:39:13.166220    4963 out.go:177] 
	W0911 04:39:13.170474    4963 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:39:13.170541    4963 out.go:239] * 
	W0911 04:39:13.172590    4963 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:39:13.182356    4963 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-011000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000: exit status 7 (47.776334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-011000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (6.99s)
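
The stderr above shows the start flow's retry shape: fixHost fails, the machines lock is released, "Will try again in 5 seconds" is logged, and a single second attempt is made before the command exits with GUEST_PROVISION. A hedged Go sketch of that control flow follows; the function names are illustrative, not minikube's actual API.

	package main

	import (
		"errors"
		"log"
		"time"
	)

	// startWithRetry mirrors the pattern in the log: one attempt, a fixed
	// five-second pause on failure, then exactly one more attempt.
	func startWithRetry(start func() error) error {
		if err := start(); err != nil {
			log.Printf("! StartHost failed, but will try again: %v", err)
			time.Sleep(5 * time.Second)
			return start()
		}
		return nil
	}

	func main() {
		err := startWithRetry(func() error {
			// Stand-in for the qemu2 driver start; it always fails here,
			// like the socket_vmnet connection in this run.
			return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
		})
		if err != nil {
			log.Printf("X Exiting due to GUEST_PROVISION: %v", err)
		}
	}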

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-011000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000: exit status 7 (32.8905ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-011000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-011000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-011000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-011000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.344291ms)

** stderr ** 
	error: context "old-k8s-version-011000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-011000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000: exit status 7 (32.558ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-011000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p old-k8s-version-011000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p old-k8s-version-011000 "sudo crictl images -o json": exit status 89 (157.347625ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-011000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p old-k8s-version-011000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p old-k8s-version-011000"
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
}
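
The decode error above is a consequence of the earlier exit status 89: `minikube ssh` printed a plain-text banner instead of running crictl, so the JSON decoder saw '*' where a value should start. Here is a sketch of the kind of check the test performs, assuming crictl's usual {"images":[{"repoTags":[...]}]} output shape; the struct and helper names are illustrative.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type crictlImages struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	// missingImages reports which wanted image tags are absent from the
	// raw `crictl images -o json` output.
	func missingImages(raw []byte, want []string) ([]string, error) {
		var parsed crictlImages
		if err := json.Unmarshal(raw, &parsed); err != nil {
			return nil, fmt.Errorf("failed to decode images json: %w", err)
		}
		have := make(map[string]bool)
		for _, img := range parsed.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		var missing []string
		for _, w := range want {
			if !have[w] {
				missing = append(missing, w)
			}
		}
		return missing, nil
	}

	func main() {
		// The banner the test actually received, which is not JSON:
		banner := []byte("* The control plane node must be running for this command")
		_, err := missingImages(banner, []string{"k8s.gcr.io/pause:3.1"})
		fmt.Println(err) // wraps: invalid character '*' looking for beginning of value
	}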
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000: exit status 7 (28.915375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-011000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.19s)

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-011000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-011000 --alsologtostderr -v=1: exit status 89 (40.471042ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-011000"

-- /stdout --
** stderr ** 
	I0911 04:39:13.550396    5039 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:39:13.550755    5039 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:39:13.550758    5039 out.go:309] Setting ErrFile to fd 2...
	I0911 04:39:13.550761    5039 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:39:13.550885    5039 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:39:13.551070    5039 out.go:303] Setting JSON to false
	I0911 04:39:13.551079    5039 mustload.go:65] Loading cluster: old-k8s-version-011000
	I0911 04:39:13.551239    5039 config.go:182] Loaded profile config "old-k8s-version-011000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0911 04:39:13.555301    5039 out.go:177] * The control plane node must be running for this command
	I0911 04:39:13.559406    5039 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-011000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-011000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000: exit status 7 (28.868125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-011000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000: exit status 7 (28.553125ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-011000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
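
Pause fails fast rather than hanging: per the stderr above, mustload loads the profile config, sees the host is not running, prints the "control plane node must be running" hint, and exits 89 without touching the cluster. A minimal sketch of that guard with hypothetical names (the real check lives in minikube's mustload package):

	package main

	import (
		"fmt"
		"os"
	)

	type hostState int

	const (
		stateStopped hostState = iota
		stateRunning
	)

	// requireRunning mirrors the early exit seen above: commands that need
	// a live control plane refuse to act on a stopped host.
	func requireRunning(profile string, state hostState) {
		if state != stateRunning {
			fmt.Println("* The control plane node must be running for this command")
			fmt.Printf("  To start a cluster, run: %q\n", "minikube start -p "+profile)
			os.Exit(89) // the exit status recorded in the test output
		}
	}

	func main() {
		requireRunning("old-k8s-version-011000", stateStopped)
	}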

TestStartStop/group/embed-certs/serial/FirstStart (11.81s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-151000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-151000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (11.7552405s)

-- stdout --
	* [embed-certs-151000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node embed-certs-151000 in cluster embed-certs-151000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-151000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0911 04:39:14.010077    5065 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:39:14.010194    5065 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:39:14.010197    5065 out.go:309] Setting ErrFile to fd 2...
	I0911 04:39:14.010199    5065 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:39:14.010317    5065 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:39:14.011340    5065 out.go:303] Setting JSON to false
	I0911 04:39:14.026130    5065 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4128,"bootTime":1694428226,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0911 04:39:14.026185    5065 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:39:14.031336    5065 out.go:177] * [embed-certs-151000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:39:14.041345    5065 out.go:177]   - MINIKUBE_LOCATION=17225
	I0911 04:39:14.037398    5065 notify.go:220] Checking for updates...
	I0911 04:39:14.049283    5065 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 04:39:14.056208    5065 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:39:14.064295    5065 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:39:14.072314    5065 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	I0911 04:39:14.080314    5065 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:39:14.084590    5065 config.go:182] Loaded profile config "no-preload-581000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:39:14.084642    5065 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:39:14.087348    5065 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:39:14.094357    5065 start.go:298] selected driver: qemu2
	I0911 04:39:14.094362    5065 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:39:14.094367    5065 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:39:14.096354    5065 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:39:14.100107    5065 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:39:14.104450    5065 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:39:14.104485    5065 cni.go:84] Creating CNI manager for ""
	I0911 04:39:14.104491    5065 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 04:39:14.104495    5065 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0911 04:39:14.104501    5065 start_flags.go:321] config:
	{Name:embed-certs-151000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-151000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:39:14.108589    5065 iso.go:125] acquiring lock: {Name:mk940869f3ea2950f9d7f9f25946c8b774cc6054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:39:14.116315    5065 out.go:177] * Starting control plane node embed-certs-151000 in cluster embed-certs-151000
	I0911 04:39:14.120294    5065 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:39:14.120319    5065 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:39:14.120333    5065 cache.go:57] Caching tarball of preloaded images
	I0911 04:39:14.120392    5065 preload.go:174] Found /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:39:14.120397    5065 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:39:14.120463    5065 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/embed-certs-151000/config.json ...
	I0911 04:39:14.120475    5065 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/embed-certs-151000/config.json: {Name:mk64ebdd71df7f0c185c6b9328313faae0a56c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:39:14.120682    5065 start.go:365] acquiring machines lock for embed-certs-151000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:39:15.667436    5065 start.go:369] acquired machines lock for "embed-certs-151000" in 1.546674291s
	I0911 04:39:15.667643    5065 start.go:93] Provisioning new machine with config: &{Name:embed-certs-151000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-151000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:39:15.667845    5065 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:39:15.677028    5065 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0911 04:39:15.723917    5065 start.go:159] libmachine.API.Create for "embed-certs-151000" (driver="qemu2")
	I0911 04:39:15.723966    5065 client.go:168] LocalClient.Create starting
	I0911 04:39:15.724064    5065 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:39:15.724114    5065 main.go:141] libmachine: Decoding PEM data...
	I0911 04:39:15.724138    5065 main.go:141] libmachine: Parsing certificate...
	I0911 04:39:15.724199    5065 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:39:15.724238    5065 main.go:141] libmachine: Decoding PEM data...
	I0911 04:39:15.724255    5065 main.go:141] libmachine: Parsing certificate...
	I0911 04:39:15.724847    5065 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:39:15.950055    5065 main.go:141] libmachine: Creating SSH key...
	I0911 04:39:16.090216    5065 main.go:141] libmachine: Creating Disk image...
	I0911 04:39:16.090226    5065 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:39:16.090396    5065 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/embed-certs-151000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/embed-certs-151000/disk.qcow2
	I0911 04:39:16.099490    5065 main.go:141] libmachine: STDOUT: 
	I0911 04:39:16.099517    5065 main.go:141] libmachine: STDERR: 
	I0911 04:39:16.099612    5065 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/embed-certs-151000/disk.qcow2 +20000M
	I0911 04:39:16.107961    5065 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:39:16.107988    5065 main.go:141] libmachine: STDERR: 
	I0911 04:39:16.108010    5065 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/embed-certs-151000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/embed-certs-151000/disk.qcow2
	I0911 04:39:16.108016    5065 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:39:16.108064    5065 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/embed-certs-151000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/embed-certs-151000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/embed-certs-151000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:70:7d:8b:6f:8c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/embed-certs-151000/disk.qcow2
	I0911 04:39:16.109689    5065 main.go:141] libmachine: STDOUT: 
	I0911 04:39:16.109701    5065 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:39:16.109723    5065 client.go:171] LocalClient.Create took 385.749709ms
	I0911 04:39:18.111899    5065 start.go:128] duration metric: createHost completed in 2.444024917s
	I0911 04:39:18.111964    5065 start.go:83] releasing machines lock for "embed-certs-151000", held for 2.444495958s
	W0911 04:39:18.112029    5065 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:39:18.130557    5065 out.go:177] * Deleting "embed-certs-151000" in qemu2 ...
	W0911 04:39:18.154427    5065 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:39:18.154471    5065 start.go:687] Will try again in 5 seconds ...
	I0911 04:39:23.156588    5065 start.go:365] acquiring machines lock for embed-certs-151000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:39:23.170774    5065 start.go:369] acquired machines lock for "embed-certs-151000" in 14.103542ms
	I0911 04:39:23.170818    5065 start.go:93] Provisioning new machine with config: &{Name:embed-certs-151000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-151000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:39:23.171061    5065 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:39:23.183535    5065 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0911 04:39:23.225983    5065 start.go:159] libmachine.API.Create for "embed-certs-151000" (driver="qemu2")
	I0911 04:39:23.226040    5065 client.go:168] LocalClient.Create starting
	I0911 04:39:23.226163    5065 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:39:23.226224    5065 main.go:141] libmachine: Decoding PEM data...
	I0911 04:39:23.226249    5065 main.go:141] libmachine: Parsing certificate...
	I0911 04:39:23.226323    5065 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:39:23.226359    5065 main.go:141] libmachine: Decoding PEM data...
	I0911 04:39:23.226382    5065 main.go:141] libmachine: Parsing certificate...
	I0911 04:39:23.226899    5065 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:39:23.358494    5065 main.go:141] libmachine: Creating SSH key...
	I0911 04:39:23.678962    5065 main.go:141] libmachine: Creating Disk image...
	I0911 04:39:23.678973    5065 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:39:23.679139    5065 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/embed-certs-151000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/embed-certs-151000/disk.qcow2
	I0911 04:39:23.688141    5065 main.go:141] libmachine: STDOUT: 
	I0911 04:39:23.688167    5065 main.go:141] libmachine: STDERR: 
	I0911 04:39:23.688236    5065 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/embed-certs-151000/disk.qcow2 +20000M
	I0911 04:39:23.696211    5065 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:39:23.696229    5065 main.go:141] libmachine: STDERR: 
	I0911 04:39:23.696251    5065 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/embed-certs-151000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/embed-certs-151000/disk.qcow2
	I0911 04:39:23.696262    5065 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:39:23.696320    5065 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/embed-certs-151000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/embed-certs-151000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/embed-certs-151000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:86:6a:b5:dc:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/embed-certs-151000/disk.qcow2
	I0911 04:39:23.697923    5065 main.go:141] libmachine: STDOUT: 
	I0911 04:39:23.697936    5065 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:39:23.697950    5065 client.go:171] LocalClient.Create took 471.906375ms
	I0911 04:39:25.700308    5065 start.go:128] duration metric: createHost completed in 2.5291745s
	I0911 04:39:25.700417    5065 start.go:83] releasing machines lock for "embed-certs-151000", held for 2.529622166s
	W0911 04:39:25.700822    5065 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-151000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-151000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:39:25.714434    5065 out.go:177] 
	W0911 04:39:25.718333    5065 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:39:25.718367    5065 out.go:239] * 
	* 
	W0911 04:39:25.720506    5065 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:39:25.729404    5065 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-151000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-151000 -n embed-certs-151000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-151000 -n embed-certs-151000: exit status 7 (49.070167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-151000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (11.81s)
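Every failure in this group stops at the same first step: socket_vmnet_client cannot reach the vmnet socket, so the QEMU VM is never launched. A minimal way to check the daemon on the test host might look like this (a diagnostic sketch using the paths shown in the logs above, not part of the recorded run):

    # Does the unix socket exist, and is a socket_vmnet daemon alive to serve it?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet
    # If socket_vmnet was installed as a root launchd service, it should be listed here
    sudo launchctl list | grep -i socket_vmnet

If the socket is missing or nothing owns it, "Connection refused" from socket_vmnet_client is exactly the symptom seen above.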

TestStartStop/group/no-preload/serial/DeployApp (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-581000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-581000 create -f testdata/busybox.yaml: exit status 1 (30.694667ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-581000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-581000 -n no-preload-581000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-581000 -n no-preload-581000: exit status 7 (116.259875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-581000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-581000 -n no-preload-581000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-581000 -n no-preload-581000: exit status 7 (37.052291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-581000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.18s)
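The "error: no openapi getter" message comes from kubectl itself and generally indicates that kubectl could not reach an API server to fetch the OpenAPI schema used for manifest validation, consistent with the VM never having started. A hypothetical quick check that the context is unreachable:

    kubectl --context no-preload-581000 cluster-info
    kubectl --context no-preload-581000 get nodes

Both should fail with connection errors while the host reports "Stopped" in the post-mortem above.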

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.14s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-581000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-581000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-581000 describe deploy/metrics-server -n kube-system: exit status 1 (26.841166ms)

** stderr ** 
	error: context "no-preload-581000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-581000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-581000 -n no-preload-581000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-581000 -n no-preload-581000: exit status 7 (29.029542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-581000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.14s)

TestStartStop/group/no-preload/serial/SecondStart (7.02s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-581000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-581000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (6.975506042s)

-- stdout --
	* [no-preload-581000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node no-preload-581000 in cluster no-preload-581000
	* Restarting existing qemu2 VM for "no-preload-581000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-581000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0911 04:39:16.262816    5093 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:39:16.262928    5093 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:39:16.262931    5093 out.go:309] Setting ErrFile to fd 2...
	I0911 04:39:16.262934    5093 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:39:16.263053    5093 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:39:16.264016    5093 out.go:303] Setting JSON to false
	I0911 04:39:16.278795    5093 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4130,"bootTime":1694428226,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0911 04:39:16.278873    5093 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:39:16.283904    5093 out.go:177] * [no-preload-581000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:39:16.290852    5093 out.go:177]   - MINIKUBE_LOCATION=17225
	I0911 04:39:16.290890    5093 notify.go:220] Checking for updates...
	I0911 04:39:16.294817    5093 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 04:39:16.297836    5093 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:39:16.300825    5093 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:39:16.303836    5093 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	I0911 04:39:16.305203    5093 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:39:16.308143    5093 config.go:182] Loaded profile config "no-preload-581000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:39:16.308410    5093 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:39:16.312852    5093 out.go:177] * Using the qemu2 driver based on existing profile
	I0911 04:39:16.318736    5093 start.go:298] selected driver: qemu2
	I0911 04:39:16.318741    5093 start.go:902] validating driver "qemu2" against &{Name:no-preload-581000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-581000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:39:16.318790    5093 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:39:16.320731    5093 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:39:16.320760    5093 cni.go:84] Creating CNI manager for ""
	I0911 04:39:16.320766    5093 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 04:39:16.320772    5093 start_flags.go:321] config:
	{Name:no-preload-581000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-581000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:39:16.324596    5093 iso.go:125] acquiring lock: {Name:mk940869f3ea2950f9d7f9f25946c8b774cc6054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:39:16.331707    5093 out.go:177] * Starting control plane node no-preload-581000 in cluster no-preload-581000
	I0911 04:39:16.335842    5093 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:39:16.335931    5093 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/no-preload-581000/config.json ...
	I0911 04:39:16.335967    5093 cache.go:107] acquiring lock: {Name:mka16b08b08162019ebcf8baf85ee0a972ec736d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:39:16.335979    5093 cache.go:107] acquiring lock: {Name:mkd9647b19f39b4355857af9d0132f5adc68bf0e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:39:16.335981    5093 cache.go:107] acquiring lock: {Name:mkd50a7370aba572340aea0670bf0a054a4f42a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:39:16.336029    5093 cache.go:115] /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0911 04:39:16.336034    5093 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 69.708µs
	I0911 04:39:16.336041    5093 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0911 04:39:16.336051    5093 cache.go:115] /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1 exists
	I0911 04:39:16.336052    5093 cache.go:115] /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1 exists
	I0911 04:39:16.335970    5093 cache.go:107] acquiring lock: {Name:mka3ed3a15858dc8376829b696fa6533d0f8db2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:39:16.336056    5093 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.1" -> "/Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1" took 89.708µs
	I0911 04:39:16.336059    5093 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.1" -> "/Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1" took 77.375µs
	I0911 04:39:16.336064    5093 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.1 -> /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1 succeeded
	I0911 04:39:16.336061    5093 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.1 -> /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1 succeeded
	I0911 04:39:16.336049    5093 cache.go:107] acquiring lock: {Name:mkff4182de539079bf183cb111e542836d9c0a3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:39:16.336069    5093 cache.go:107] acquiring lock: {Name:mk8bdfcc11af336f1c1f2c840abc75a4bd8805a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:39:16.336089    5093 cache.go:115] /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1 exists
	I0911 04:39:16.336092    5093 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.1" -> "/Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1" took 124.792µs
	I0911 04:39:16.336097    5093 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.1 -> /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1 succeeded
	I0911 04:39:16.336103    5093 cache.go:115] /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0911 04:39:16.336107    5093 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 58.542µs
	I0911 04:39:16.336114    5093 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0911 04:39:16.336111    5093 cache.go:115] /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1 exists
	I0911 04:39:16.336119    5093 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.1" -> "/Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1" took 50.916µs
	I0911 04:39:16.336123    5093 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.1 -> /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1 succeeded
	I0911 04:39:16.336326    5093 cache.go:107] acquiring lock: {Name:mkb5d0603bae1665fee1df78c2dee6ddeb85a542 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:39:16.336423    5093 start.go:365] acquiring machines lock for no-preload-581000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:39:16.336488    5093 cache.go:115] /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 exists
	I0911 04:39:16.336516    5093 cache.go:96] cache image "registry.k8s.io/etcd:3.5.9-0" -> "/Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0" took 440.875µs
	I0911 04:39:16.336527    5093 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.9-0 -> /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 succeeded
	I0911 04:39:16.336656    5093 cache.go:107] acquiring lock: {Name:mkf6b28353814d47f64a45c5787e09c5b20e3de3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:39:16.336837    5093 cache.go:115] /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0911 04:39:16.336897    5093 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 791.792µs
	I0911 04:39:16.336918    5093 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17225-951/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0911 04:39:16.336926    5093 cache.go:87] Successfully saved all images to host disk.
	I0911 04:39:18.112237    5093 start.go:369] acquired machines lock for "no-preload-581000" in 1.775632959s
	I0911 04:39:18.112311    5093 start.go:96] Skipping create...Using existing machine configuration
	I0911 04:39:18.112341    5093 fix.go:54] fixHost starting: 
	I0911 04:39:18.112963    5093 fix.go:102] recreateIfNeeded on no-preload-581000: state=Stopped err=<nil>
	W0911 04:39:18.113012    5093 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 04:39:18.122546    5093 out.go:177] * Restarting existing qemu2 VM for "no-preload-581000" ...
	I0911 04:39:18.134647    5093 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/no-preload-581000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/no-preload-581000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/no-preload-581000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:5b:3c:12:5e:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/no-preload-581000/disk.qcow2
	I0911 04:39:18.145244    5093 main.go:141] libmachine: STDOUT: 
	I0911 04:39:18.145305    5093 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:39:18.145434    5093 fix.go:56] fixHost completed within 33.090375ms
	I0911 04:39:18.145450    5093 start.go:83] releasing machines lock for "no-preload-581000", held for 33.176583ms
	W0911 04:39:18.145483    5093 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:39:18.145707    5093 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:39:18.145724    5093 start.go:687] Will try again in 5 seconds ...
	I0911 04:39:23.147353    5093 start.go:365] acquiring machines lock for no-preload-581000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:39:23.147901    5093 start.go:369] acquired machines lock for "no-preload-581000" in 374.875µs
	I0911 04:39:23.148063    5093 start.go:96] Skipping create...Using existing machine configuration
	I0911 04:39:23.148083    5093 fix.go:54] fixHost starting: 
	I0911 04:39:23.148894    5093 fix.go:102] recreateIfNeeded on no-preload-581000: state=Stopped err=<nil>
	W0911 04:39:23.148919    5093 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 04:39:23.153771    5093 out.go:177] * Restarting existing qemu2 VM for "no-preload-581000" ...
	I0911 04:39:23.161732    5093 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/no-preload-581000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/no-preload-581000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/no-preload-581000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:5b:3c:12:5e:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/no-preload-581000/disk.qcow2
	I0911 04:39:23.170533    5093 main.go:141] libmachine: STDOUT: 
	I0911 04:39:23.170584    5093 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:39:23.170681    5093 fix.go:56] fixHost completed within 22.598916ms
	I0911 04:39:23.170698    5093 start.go:83] releasing machines lock for "no-preload-581000", held for 22.764583ms
	W0911 04:39:23.170875    5093 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-581000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-581000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:39:23.187408    5093 out.go:177] 
	W0911 04:39:23.191676    5093 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:39:23.191703    5093 out.go:239] * 
	* 
	W0911 04:39:23.193310    5093 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:39:23.202557    5093 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-581000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-581000 -n no-preload-581000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-581000 -n no-preload-581000: exit status 7 (48.090292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-581000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (7.02s)
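The restart path fails the same way as the fresh create: fixHost re-runs socket_vmnet_client against /var/run/socket_vmnet and is refused again. If the daemon is simply not running, the upstream socket_vmnet README documents launching it manually against the same socket path; a sketch under that assumption (the gateway address here is illustrative, not taken from this run):

    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

vmnet networking requires root, which is why the long-lived daemon, rather than each per-VM client, holds the privilege.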

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-581000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-581000 -n no-preload-581000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-581000 -n no-preload-581000: exit status 7 (33.52225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-581000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-581000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-581000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-581000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.499083ms)

** stderr ** 
	error: context "no-preload-581000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-581000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-581000 -n no-preload-581000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-581000 -n no-preload-581000: exit status 7 (33.547333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-581000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.17s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p no-preload-581000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p no-preload-581000 "sudo crictl images -o json": exit status 89 (145.823625ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-581000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p no-preload-581000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p no-preload-581000"
start_stop_delete_test.go:304: v1.28.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.1",
- 	"registry.k8s.io/kube-controller-manager:v1.28.1",
- 	"registry.k8s.io/kube-proxy:v1.28.1",
- 	"registry.k8s.io/kube-scheduler:v1.28.1",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-581000 -n no-preload-581000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-581000 -n no-preload-581000: exit status 7 (28.521ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-581000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.17s)
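The verification step only parses `sudo crictl images -o json` from inside the node, so with the host stopped the decoder receives the plain-text "control plane node must be running" banner instead of JSON, hence the complaint about the leading '*'. On a healthy cluster the expected list above could be spot-checked with something like this (assumes jq is available on the host; the `[]?` tolerates untagged images):

    out/minikube-darwin-arm64 ssh -p no-preload-581000 "sudo crictl images -o json" | jq -r '.images[].repoTags[]?'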

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-581000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-581000 --alsologtostderr -v=1: exit status 89 (41.047667ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-581000"

-- /stdout --
** stderr ** 
	I0911 04:39:23.561802    5116 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:39:23.561939    5116 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:39:23.561941    5116 out.go:309] Setting ErrFile to fd 2...
	I0911 04:39:23.561944    5116 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:39:23.562050    5116 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:39:23.562274    5116 out.go:303] Setting JSON to false
	I0911 04:39:23.562283    5116 mustload.go:65] Loading cluster: no-preload-581000
	I0911 04:39:23.562503    5116 config.go:182] Loaded profile config "no-preload-581000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:39:23.566482    5116 out.go:177] * The control plane node must be running for this command
	I0911 04:39:23.570526    5116 out.go:177]   To start a cluster, run: "minikube start -p no-preload-581000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-581000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-581000 -n no-preload-581000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-581000 -n no-preload-581000: exit status 7 (29.604125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-581000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-581000 -n no-preload-581000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-581000 -n no-preload-581000: exit status 7 (28.871709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-581000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-775000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-775000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (11.395715625s)

-- stdout --
	* [default-k8s-diff-port-775000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node default-k8s-diff-port-775000 in cluster default-k8s-diff-port-775000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-775000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0911 04:39:24.263019    5154 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:39:24.263128    5154 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:39:24.263131    5154 out.go:309] Setting ErrFile to fd 2...
	I0911 04:39:24.263133    5154 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:39:24.263236    5154 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:39:24.264229    5154 out.go:303] Setting JSON to false
	I0911 04:39:24.279122    5154 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4138,"bootTime":1694428226,"procs":409,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0911 04:39:24.279179    5154 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:39:24.283911    5154 out.go:177] * [default-k8s-diff-port-775000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:39:24.290881    5154 out.go:177]   - MINIKUBE_LOCATION=17225
	I0911 04:39:24.290898    5154 notify.go:220] Checking for updates...
	I0911 04:39:24.294826    5154 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 04:39:24.297879    5154 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:39:24.300879    5154 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:39:24.303888    5154 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	I0911 04:39:24.305242    5154 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:39:24.308136    5154 config.go:182] Loaded profile config "embed-certs-151000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:39:24.308178    5154 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:39:24.312900    5154 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:39:24.317806    5154 start.go:298] selected driver: qemu2
	I0911 04:39:24.317811    5154 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:39:24.317817    5154 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:39:24.319680    5154 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:39:24.322861    5154 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:39:24.325982    5154 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:39:24.326010    5154 cni.go:84] Creating CNI manager for ""
	I0911 04:39:24.326018    5154 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 04:39:24.326026    5154 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0911 04:39:24.326034    5154 start_flags.go:321] config:
	{Name:default-k8s-diff-port-775000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-775000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:39:24.330001    5154 iso.go:125] acquiring lock: {Name:mk940869f3ea2950f9d7f9f25946c8b774cc6054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:39:24.336837    5154 out.go:177] * Starting control plane node default-k8s-diff-port-775000 in cluster default-k8s-diff-port-775000
	I0911 04:39:24.340872    5154 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:39:24.340891    5154 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:39:24.340908    5154 cache.go:57] Caching tarball of preloaded images
	I0911 04:39:24.340968    5154 preload.go:174] Found /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:39:24.340974    5154 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:39:24.341038    5154 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/default-k8s-diff-port-775000/config.json ...
	I0911 04:39:24.341049    5154 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/default-k8s-diff-port-775000/config.json: {Name:mk27f0d5b3afd0e2382c5628917577633283b97f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:39:24.341247    5154 start.go:365] acquiring machines lock for default-k8s-diff-port-775000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:39:25.700564    5154 start.go:369] acquired machines lock for "default-k8s-diff-port-775000" in 1.359274208s
	I0911 04:39:25.700791    5154 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-775000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-775000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:39:25.701033    5154 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:39:25.710386    5154 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0911 04:39:25.755299    5154 start.go:159] libmachine.API.Create for "default-k8s-diff-port-775000" (driver="qemu2")
	I0911 04:39:25.755358    5154 client.go:168] LocalClient.Create starting
	I0911 04:39:25.755453    5154 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:39:25.755510    5154 main.go:141] libmachine: Decoding PEM data...
	I0911 04:39:25.755536    5154 main.go:141] libmachine: Parsing certificate...
	I0911 04:39:25.755604    5154 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:39:25.755645    5154 main.go:141] libmachine: Decoding PEM data...
	I0911 04:39:25.755657    5154 main.go:141] libmachine: Parsing certificate...
	I0911 04:39:25.756239    5154 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:39:26.034429    5154 main.go:141] libmachine: Creating SSH key...
	I0911 04:39:26.145894    5154 main.go:141] libmachine: Creating Disk image...
	I0911 04:39:26.145902    5154 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:39:26.146040    5154 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/default-k8s-diff-port-775000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/default-k8s-diff-port-775000/disk.qcow2
	I0911 04:39:26.162657    5154 main.go:141] libmachine: STDOUT: 
	I0911 04:39:26.162681    5154 main.go:141] libmachine: STDERR: 
	I0911 04:39:26.162750    5154 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/default-k8s-diff-port-775000/disk.qcow2 +20000M
	I0911 04:39:26.175996    5154 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:39:26.176028    5154 main.go:141] libmachine: STDERR: 
	I0911 04:39:26.176053    5154 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/default-k8s-diff-port-775000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/default-k8s-diff-port-775000/disk.qcow2
	I0911 04:39:26.176064    5154 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:39:26.176106    5154 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/default-k8s-diff-port-775000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/default-k8s-diff-port-775000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/default-k8s-diff-port-775000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:3b:26:b2:a2:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/default-k8s-diff-port-775000/disk.qcow2
	I0911 04:39:26.177877    5154 main.go:141] libmachine: STDOUT: 
	I0911 04:39:26.177893    5154 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:39:26.177915    5154 client.go:171] LocalClient.Create took 422.549875ms
	I0911 04:39:28.180097    5154 start.go:128] duration metric: createHost completed in 2.479035958s
	I0911 04:39:28.180175    5154 start.go:83] releasing machines lock for "default-k8s-diff-port-775000", held for 2.4795615s
	W0911 04:39:28.180270    5154 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:39:28.198860    5154 out.go:177] * Deleting "default-k8s-diff-port-775000" in qemu2 ...
	W0911 04:39:28.219826    5154 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:39:28.219853    5154 start.go:687] Will try again in 5 seconds ...
	I0911 04:39:33.222049    5154 start.go:365] acquiring machines lock for default-k8s-diff-port-775000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:39:33.240080    5154 start.go:369] acquired machines lock for "default-k8s-diff-port-775000" in 17.947791ms
	I0911 04:39:33.240149    5154 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-775000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-775000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:39:33.240383    5154 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:39:33.251748    5154 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0911 04:39:33.294686    5154 start.go:159] libmachine.API.Create for "default-k8s-diff-port-775000" (driver="qemu2")
	I0911 04:39:33.294726    5154 client.go:168] LocalClient.Create starting
	I0911 04:39:33.294819    5154 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:39:33.294869    5154 main.go:141] libmachine: Decoding PEM data...
	I0911 04:39:33.294893    5154 main.go:141] libmachine: Parsing certificate...
	I0911 04:39:33.294953    5154 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:39:33.294988    5154 main.go:141] libmachine: Decoding PEM data...
	I0911 04:39:33.295003    5154 main.go:141] libmachine: Parsing certificate...
	I0911 04:39:33.295478    5154 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:39:33.435385    5154 main.go:141] libmachine: Creating SSH key...
	I0911 04:39:33.568551    5154 main.go:141] libmachine: Creating Disk image...
	I0911 04:39:33.568559    5154 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:39:33.568722    5154 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/default-k8s-diff-port-775000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/default-k8s-diff-port-775000/disk.qcow2
	I0911 04:39:33.582449    5154 main.go:141] libmachine: STDOUT: 
	I0911 04:39:33.582475    5154 main.go:141] libmachine: STDERR: 
	I0911 04:39:33.582567    5154 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/default-k8s-diff-port-775000/disk.qcow2 +20000M
	I0911 04:39:33.590242    5154 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:39:33.590259    5154 main.go:141] libmachine: STDERR: 
	I0911 04:39:33.590291    5154 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/default-k8s-diff-port-775000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/default-k8s-diff-port-775000/disk.qcow2
	I0911 04:39:33.590311    5154 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:39:33.590362    5154 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/default-k8s-diff-port-775000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/default-k8s-diff-port-775000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/default-k8s-diff-port-775000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:52:69:61:0d:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/default-k8s-diff-port-775000/disk.qcow2
	I0911 04:39:33.592102    5154 main.go:141] libmachine: STDOUT: 
	I0911 04:39:33.592121    5154 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:39:33.592139    5154 client.go:171] LocalClient.Create took 297.4085ms
	I0911 04:39:35.594454    5154 start.go:128] duration metric: createHost completed in 2.354011083s
	I0911 04:39:35.594537    5154 start.go:83] releasing machines lock for "default-k8s-diff-port-775000", held for 2.354434417s
	W0911 04:39:35.594917    5154 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-775000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-775000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:39:35.607438    5154 out.go:177] 
	W0911 04:39:35.611572    5154 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:39:35.611616    5154 out.go:239] * 
	* 
	W0911 04:39:35.613829    5154 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:39:35.622526    5154 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-775000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-775000 -n default-k8s-diff-port-775000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-775000 -n default-k8s-diff-port-775000: exit status 7 (50.321709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-775000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.45s)
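
Every qemu2 start failure in this report reduces to the host-side symptom visible above: each VM launch goes through /opt/socket_vmnet/bin/socket_vmnet_client and dies with Failed to connect to "/var/run/socket_vmnet": Connection refused, i.e. nothing is listening on the socket the driver dials, so both createHost attempts (the second one five seconds after the first) fail in well under a second. A minimal shell sketch for checking the daemon on the agent; the Homebrew service name is an assumption based on minikube's documented socket_vmnet setup, not something taken from this log:

	# Does the unix socket exist, and is anything serving it?
	ls -l /var/run/socket_vmnet
	sudo lsof -U 2>/dev/null | grep socket_vmnet
	# Restart the daemon (assumes a Homebrew-managed socket_vmnet install)
	sudo brew services restart socket_vmnet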

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-151000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-151000 create -f testdata/busybox.yaml: exit status 1 (29.385416ms)

                                                
                                                
** stderr ** 
	error: no openapi getter

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-151000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-151000 -n embed-certs-151000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-151000 -n embed-certs-151000: exit status 7 (61.466291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-151000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-151000 -n embed-certs-151000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-151000 -n embed-certs-151000: exit status 7 (32.013125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-151000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.12s)
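
The "no openapi getter" error from kubectl is a downstream symptom here rather than the root failure: the embed-certs-151000 VM never started, so there is no API server for kubectl to validate the manifest against. A quick way to confirm the cluster is unreachable before digging into kubectl internals (both are standard kubectl commands; nothing here is specific to this harness):

	kubectl config get-contexts embed-certs-151000
	kubectl --context embed-certs-151000 cluster-info   # expected to error while the VM is down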

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-151000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-151000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-151000 describe deploy/metrics-server -n kube-system: exit status 1 (27.496541ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-151000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-151000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-151000 -n embed-certs-151000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-151000 -n embed-certs-151000: exit status 7 (28.388666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-151000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.20s)
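
Note that the addons enable call itself exits cleanly here: it records the metrics-server addon (with the fake.domain registry override) in the profile config, which does not require a running cluster. Only the kubectl describe that follows fails, because start never wrote an embed-certs-151000 context into the kubeconfig. The recorded addon state can still be inspected offline; a sketch using a standard minikube subcommand with the profile name from this run:

	out/minikube-darwin-arm64 addons list -p embed-certs-151000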

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (7.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-151000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1
E0911 04:39:32.396474    1393 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-151000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (7.0101745s)

                                                
                                                
-- stdout --
	* [embed-certs-151000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node embed-certs-151000 in cluster embed-certs-151000
	* Restarting existing qemu2 VM for "embed-certs-151000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-151000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0911 04:39:26.292470    5182 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:39:26.292576    5182 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:39:26.292579    5182 out.go:309] Setting ErrFile to fd 2...
	I0911 04:39:26.292581    5182 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:39:26.292688    5182 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:39:26.293688    5182 out.go:303] Setting JSON to false
	I0911 04:39:26.308698    5182 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4140,"bootTime":1694428226,"procs":409,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0911 04:39:26.308761    5182 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:39:26.312318    5182 out.go:177] * [embed-certs-151000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:39:26.319194    5182 out.go:177]   - MINIKUBE_LOCATION=17225
	I0911 04:39:26.319261    5182 notify.go:220] Checking for updates...
	I0911 04:39:26.327353    5182 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 04:39:26.328684    5182 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:39:26.331322    5182 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:39:26.334331    5182 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	I0911 04:39:26.337337    5182 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:39:26.340601    5182 config.go:182] Loaded profile config "embed-certs-151000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:39:26.340839    5182 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:39:26.345291    5182 out.go:177] * Using the qemu2 driver based on existing profile
	I0911 04:39:26.352232    5182 start.go:298] selected driver: qemu2
	I0911 04:39:26.352237    5182 start.go:902] validating driver "qemu2" against &{Name:embed-certs-151000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-151000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:39:26.352288    5182 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:39:26.354185    5182 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:39:26.354211    5182 cni.go:84] Creating CNI manager for ""
	I0911 04:39:26.354217    5182 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 04:39:26.354221    5182 start_flags.go:321] config:
	{Name:embed-certs-151000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-151000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:39:26.358081    5182 iso.go:125] acquiring lock: {Name:mk940869f3ea2950f9d7f9f25946c8b774cc6054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:39:26.364198    5182 out.go:177] * Starting control plane node embed-certs-151000 in cluster embed-certs-151000
	I0911 04:39:26.368233    5182 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:39:26.368255    5182 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:39:26.368269    5182 cache.go:57] Caching tarball of preloaded images
	I0911 04:39:26.368316    5182 preload.go:174] Found /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:39:26.368321    5182 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:39:26.368389    5182 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/embed-certs-151000/config.json ...
	I0911 04:39:26.368692    5182 start.go:365] acquiring machines lock for embed-certs-151000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:39:28.180310    5182 start.go:369] acquired machines lock for "embed-certs-151000" in 1.81158075s
	I0911 04:39:28.180521    5182 start.go:96] Skipping create...Using existing machine configuration
	I0911 04:39:28.180558    5182 fix.go:54] fixHost starting: 
	I0911 04:39:28.181268    5182 fix.go:102] recreateIfNeeded on embed-certs-151000: state=Stopped err=<nil>
	W0911 04:39:28.181318    5182 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 04:39:28.190806    5182 out.go:177] * Restarting existing qemu2 VM for "embed-certs-151000" ...
	I0911 04:39:28.200496    5182 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/embed-certs-151000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/embed-certs-151000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/embed-certs-151000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:86:6a:b5:dc:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/embed-certs-151000/disk.qcow2
	I0911 04:39:28.209987    5182 main.go:141] libmachine: STDOUT: 
	I0911 04:39:28.210066    5182 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:39:28.210206    5182 fix.go:56] fixHost completed within 29.650916ms
	I0911 04:39:28.210236    5182 start.go:83] releasing machines lock for "embed-certs-151000", held for 29.89525ms
	W0911 04:39:28.210278    5182 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:39:28.210461    5182 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:39:28.210481    5182 start.go:687] Will try again in 5 seconds ...
	I0911 04:39:33.212740    5182 start.go:365] acquiring machines lock for embed-certs-151000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:39:33.213157    5182 start.go:369] acquired machines lock for "embed-certs-151000" in 289.334µs
	I0911 04:39:33.213270    5182 start.go:96] Skipping create...Using existing machine configuration
	I0911 04:39:33.213289    5182 fix.go:54] fixHost starting: 
	I0911 04:39:33.214685    5182 fix.go:102] recreateIfNeeded on embed-certs-151000: state=Stopped err=<nil>
	W0911 04:39:33.217595    5182 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 04:39:33.226688    5182 out.go:177] * Restarting existing qemu2 VM for "embed-certs-151000" ...
	I0911 04:39:33.230898    5182 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/embed-certs-151000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/embed-certs-151000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/embed-certs-151000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:86:6a:b5:dc:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/embed-certs-151000/disk.qcow2
	I0911 04:39:33.239808    5182 main.go:141] libmachine: STDOUT: 
	I0911 04:39:33.239862    5182 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:39:33.239968    5182 fix.go:56] fixHost completed within 26.679625ms
	I0911 04:39:33.239991    5182 start.go:83] releasing machines lock for "embed-certs-151000", held for 26.810458ms
	W0911 04:39:33.240210    5182 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-151000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-151000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:39:33.251746    5182 out.go:177] 
	W0911 04:39:33.255680    5182 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:39:33.255717    5182 out.go:239] * 
	* 
	W0911 04:39:33.258068    5182 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:39:33.267647    5182 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-151000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-151000 -n embed-certs-151000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-151000 -n embed-certs-151000: exit status 7 (50.816458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-151000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (7.06s)
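
On this second start the driver takes the fixHost path ("Skipping create...Using existing machine configuration") and restarts the existing VM, but both restart attempts fail on the same socket_vmnet connection. The recovery the CLI itself prints is to discard the wedged profile and recreate it; a sketch using the commands from this run (this only helps once /var/run/socket_vmnet is being served again):

	out/minikube-darwin-arm64 delete -p embed-certs-151000
	out/minikube-darwin-arm64 start -p embed-certs-151000 --memory=2200 --wait=true --embed-certs --driver=qemu2 --kubernetes-version=v1.28.1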

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-151000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-151000 -n embed-certs-151000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-151000 -n embed-certs-151000: exit status 7 (35.181042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-151000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-151000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-151000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-151000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.401458ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-151000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-151000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-151000 -n embed-certs-151000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-151000 -n embed-certs-151000: exit status 7 (32.317916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-151000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p embed-certs-151000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p embed-certs-151000 "sudo crictl images -o json": exit status 89 (156.386542ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-151000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p embed-certs-151000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p embed-certs-151000"
start_stop_delete_test.go:304: v1.28.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.1",
- 	"registry.k8s.io/kube-controller-manager:v1.28.1",
- 	"registry.k8s.io/kube-proxy:v1.28.1",
- 	"registry.k8s.io/kube-scheduler:v1.28.1",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-151000 -n embed-certs-151000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-151000 -n embed-certs-151000: exit status 7 (28.088125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-151000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.18s)
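
This check shells into the node, dumps the container images as JSON, and diffs them against the expected v1.28.1 image list; with the host stopped it receives the exit-89 banner instead of JSON, hence the decode error on the leading '*'. Against a running node the same comparison can be reproduced by hand; a sketch assuming jq is installed on the host (jq is not part of this harness):

	out/minikube-darwin-arm64 ssh -p embed-certs-151000 "sudo crictl images -o json" \
	  | jq -r '.images[].repoTags[]' | sort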

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-151000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-151000 --alsologtostderr -v=1: exit status 89 (41.351833ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-151000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0911 04:39:33.636439    5205 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:39:33.636616    5205 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:39:33.636619    5205 out.go:309] Setting ErrFile to fd 2...
	I0911 04:39:33.636622    5205 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:39:33.636729    5205 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:39:33.636936    5205 out.go:303] Setting JSON to false
	I0911 04:39:33.636947    5205 mustload.go:65] Loading cluster: embed-certs-151000
	I0911 04:39:33.637106    5205 config.go:182] Loaded profile config "embed-certs-151000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:39:33.642557    5205 out.go:177] * The control plane node must be running for this command
	I0911 04:39:33.646863    5205 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-151000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-151000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-151000 -n embed-certs-151000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-151000 -n embed-certs-151000: exit status 7 (27.843417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-151000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-151000 -n embed-certs-151000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-151000 -n embed-certs-151000: exit status 7 (27.851625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-151000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (11.4s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-757000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-757000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (11.329679709s)

                                                
                                                
-- stdout --
	* [newest-cni-757000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node newest-cni-757000 in cluster newest-cni-757000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-757000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0911 04:39:34.099631    5228 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:39:34.099744    5228 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:39:34.099747    5228 out.go:309] Setting ErrFile to fd 2...
	I0911 04:39:34.099750    5228 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:39:34.099866    5228 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:39:34.100874    5228 out.go:303] Setting JSON to false
	I0911 04:39:34.115723    5228 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4148,"bootTime":1694428226,"procs":409,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0911 04:39:34.115785    5228 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:39:34.119899    5228 out.go:177] * [newest-cni-757000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:39:34.127089    5228 out.go:177]   - MINIKUBE_LOCATION=17225
	I0911 04:39:34.129909    5228 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 04:39:34.127171    5228 notify.go:220] Checking for updates...
	I0911 04:39:34.136002    5228 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:39:34.137425    5228 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:39:34.140032    5228 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	I0911 04:39:34.143035    5228 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:39:34.146370    5228 config.go:182] Loaded profile config "default-k8s-diff-port-775000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:39:34.146436    5228 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:39:34.151005    5228 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:39:34.158012    5228 start.go:298] selected driver: qemu2
	I0911 04:39:34.158016    5228 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:39:34.158022    5228 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:39:34.160008    5228 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	W0911 04:39:34.160025    5228 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0911 04:39:34.168011    5228 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:39:34.171135    5228 start_flags.go:941] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0911 04:39:34.171169    5228 cni.go:84] Creating CNI manager for ""
	I0911 04:39:34.171176    5228 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 04:39:34.171182    5228 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0911 04:39:34.171187    5228 start_flags.go:321] config:
	{Name:newest-cni-757000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-757000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:39:34.175428    5228 iso.go:125] acquiring lock: {Name:mk940869f3ea2950f9d7f9f25946c8b774cc6054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:39:34.181998    5228 out.go:177] * Starting control plane node newest-cni-757000 in cluster newest-cni-757000
	I0911 04:39:34.185908    5228 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:39:34.185929    5228 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:39:34.185941    5228 cache.go:57] Caching tarball of preloaded images
	I0911 04:39:34.186002    5228 preload.go:174] Found /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:39:34.186007    5228 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:39:34.186063    5228 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/newest-cni-757000/config.json ...
	I0911 04:39:34.186075    5228 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/newest-cni-757000/config.json: {Name:mkaae5578d328c3718662f60396ea2cb041646f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:39:34.186280    5228 start.go:365] acquiring machines lock for newest-cni-757000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:39:35.594695    5228 start.go:369] acquired machines lock for "newest-cni-757000" in 1.408349041s
	I0911 04:39:35.594894    5228 start.go:93] Provisioning new machine with config: &{Name:newest-cni-757000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-757000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:39:35.595100    5228 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:39:35.604500    5228 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0911 04:39:35.650499    5228 start.go:159] libmachine.API.Create for "newest-cni-757000" (driver="qemu2")
	I0911 04:39:35.650827    5228 client.go:168] LocalClient.Create starting
	I0911 04:39:35.650967    5228 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:39:35.651024    5228 main.go:141] libmachine: Decoding PEM data...
	I0911 04:39:35.651043    5228 main.go:141] libmachine: Parsing certificate...
	I0911 04:39:35.651116    5228 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:39:35.651160    5228 main.go:141] libmachine: Decoding PEM data...
	I0911 04:39:35.651180    5228 main.go:141] libmachine: Parsing certificate...
	I0911 04:39:35.652202    5228 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:39:35.781704    5228 main.go:141] libmachine: Creating SSH key...
	I0911 04:39:35.930855    5228 main.go:141] libmachine: Creating Disk image...
	I0911 04:39:35.930867    5228 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:39:35.931085    5228 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/newest-cni-757000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/newest-cni-757000/disk.qcow2
	I0911 04:39:35.945957    5228 main.go:141] libmachine: STDOUT: 
	I0911 04:39:35.945980    5228 main.go:141] libmachine: STDERR: 
	I0911 04:39:35.946051    5228 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/newest-cni-757000/disk.qcow2 +20000M
	I0911 04:39:35.960358    5228 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:39:35.960393    5228 main.go:141] libmachine: STDERR: 
	I0911 04:39:35.960420    5228 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/newest-cni-757000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/newest-cni-757000/disk.qcow2
	I0911 04:39:35.960430    5228 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:39:35.960506    5228 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/newest-cni-757000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/newest-cni-757000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/newest-cni-757000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:1b:42:17:0b:49 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/newest-cni-757000/disk.qcow2
	I0911 04:39:35.962677    5228 main.go:141] libmachine: STDOUT: 
	I0911 04:39:35.962692    5228 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:39:35.962713    5228 client.go:171] LocalClient.Create took 311.873708ms
	I0911 04:39:37.964888    5228 start.go:128] duration metric: createHost completed in 2.3697305s
	I0911 04:39:37.964961    5228 start.go:83] releasing machines lock for "newest-cni-757000", held for 2.3702365s
	W0911 04:39:37.965048    5228 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:39:37.982588    5228 out.go:177] * Deleting "newest-cni-757000" in qemu2 ...
	W0911 04:39:38.004649    5228 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:39:38.004680    5228 start.go:687] Will try again in 5 seconds ...
	I0911 04:39:43.005894    5228 start.go:365] acquiring machines lock for newest-cni-757000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:39:43.019558    5228 start.go:369] acquired machines lock for "newest-cni-757000" in 13.588209ms
	I0911 04:39:43.019628    5228 start.go:93] Provisioning new machine with config: &{Name:newest-cni-757000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-757000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:39:43.019885    5228 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:39:43.028646    5228 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0911 04:39:43.067127    5228 start.go:159] libmachine.API.Create for "newest-cni-757000" (driver="qemu2")
	I0911 04:39:43.067185    5228 client.go:168] LocalClient.Create starting
	I0911 04:39:43.067272    5228 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/ca.pem
	I0911 04:39:43.067317    5228 main.go:141] libmachine: Decoding PEM data...
	I0911 04:39:43.067332    5228 main.go:141] libmachine: Parsing certificate...
	I0911 04:39:43.067403    5228 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17225-951/.minikube/certs/cert.pem
	I0911 04:39:43.067434    5228 main.go:141] libmachine: Decoding PEM data...
	I0911 04:39:43.067445    5228 main.go:141] libmachine: Parsing certificate...
	I0911 04:39:43.067858    5228 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17225-951/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:39:43.194899    5228 main.go:141] libmachine: Creating SSH key...
	I0911 04:39:43.337208    5228 main.go:141] libmachine: Creating Disk image...
	I0911 04:39:43.337222    5228 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:39:43.337500    5228 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17225-951/.minikube/machines/newest-cni-757000/disk.qcow2.raw /Users/jenkins/minikube-integration/17225-951/.minikube/machines/newest-cni-757000/disk.qcow2
	I0911 04:39:43.353413    5228 main.go:141] libmachine: STDOUT: 
	I0911 04:39:43.353436    5228 main.go:141] libmachine: STDERR: 
	I0911 04:39:43.353543    5228 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/newest-cni-757000/disk.qcow2 +20000M
	I0911 04:39:43.364817    5228 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:39:43.364851    5228 main.go:141] libmachine: STDERR: 
	I0911 04:39:43.364866    5228 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17225-951/.minikube/machines/newest-cni-757000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17225-951/.minikube/machines/newest-cni-757000/disk.qcow2
	I0911 04:39:43.364875    5228 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:39:43.364948    5228 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/newest-cni-757000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/newest-cni-757000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/newest-cni-757000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:f7:69:cf:f1:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/newest-cni-757000/disk.qcow2
	I0911 04:39:43.367163    5228 main.go:141] libmachine: STDOUT: 
	I0911 04:39:43.367186    5228 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:39:43.367203    5228 client.go:171] LocalClient.Create took 300.013333ms
	I0911 04:39:45.369465    5228 start.go:128] duration metric: createHost completed in 2.349501833s
	I0911 04:39:45.369529    5228 start.go:83] releasing machines lock for "newest-cni-757000", held for 2.349946625s
	W0911 04:39:45.369895    5228 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-757000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-757000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:39:45.373960    5228 out.go:177] 
	W0911 04:39:45.380751    5228 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:39:45.380779    5228 out.go:239] * 
	* 
	W0911 04:39:45.383637    5228 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:39:45.391697    5228 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-757000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-757000 -n newest-cni-757000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-757000 -n newest-cni-757000: exit status 7 (67.234416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-757000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (11.40s)
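
Note: the qemu2 starts in this section all fail at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon on the host, so QEMU is never launched and minikube falls back to delete/retry before exiting with GUEST_PROVISION. The connectivity check can be reproduced outside the test suite with a short Go probe (an illustrative sketch, not part of the suite; the socket path is taken from the SocketVMnetPath field in the profile config above):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Dial the same unix socket that socket_vmnet_client uses; "connection
		// refused" here corresponds to the ERROR lines in the run above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

A non-zero exit from this probe on the build agent would point at the socket_vmnet daemon, not at minikube itself, as the component to restart.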

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-775000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-775000 create -f testdata/busybox.yaml: exit status 1 (29.533542ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-775000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-775000 -n default-k8s-diff-port-775000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-775000 -n default-k8s-diff-port-775000: exit status 7 (32.410125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-775000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-775000 -n default-k8s-diff-port-775000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-775000 -n default-k8s-diff-port-775000: exit status 7 (32.141833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-775000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-775000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-775000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-775000 describe deploy/metrics-server -n kube-system: exit status 1 (28.734542ms)

** stderr ** 
	error: context "default-k8s-diff-port-775000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-775000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-775000 -n default-k8s-diff-port-775000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-775000 -n default-k8s-diff-port-775000: exit status 7 (27.800875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-775000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.24s)
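
Note: the kubectl failures in this group are downstream of the failed start: minikube never wrote the "default-k8s-diff-port-775000" context into the kubeconfig, so kubectl exits 1 before contacting any API server. A sketch of the same pre-check using client-go's clientcmd loader (illustrative only, not test code; the context name is this run's profile):

	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// KUBECONFIG as printed in the run header, e.g. .../17225-951/kubeconfig.
		cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
		if err != nil {
			fmt.Fprintln(os.Stderr, "cannot load kubeconfig:", err)
			os.Exit(1)
		}
		if _, ok := cfg.Contexts["default-k8s-diff-port-775000"]; !ok {
			fmt.Fprintln(os.Stderr, "context missing: cluster start never completed")
			os.Exit(1)
		}
		fmt.Println("context exists")
	}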

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.93s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-775000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-775000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (6.890770917s)

-- stdout --
	* [default-k8s-diff-port-775000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node default-k8s-diff-port-775000 in cluster default-k8s-diff-port-775000
	* Restarting existing qemu2 VM for "default-k8s-diff-port-775000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-775000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0911 04:39:36.195448    5256 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:39:36.195557    5256 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:39:36.195560    5256 out.go:309] Setting ErrFile to fd 2...
	I0911 04:39:36.195563    5256 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:39:36.195689    5256 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:39:36.196665    5256 out.go:303] Setting JSON to false
	I0911 04:39:36.211750    5256 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4150,"bootTime":1694428226,"procs":409,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0911 04:39:36.211826    5256 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:39:36.215556    5256 out.go:177] * [default-k8s-diff-port-775000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:39:36.222460    5256 out.go:177]   - MINIKUBE_LOCATION=17225
	I0911 04:39:36.222525    5256 notify.go:220] Checking for updates...
	I0911 04:39:36.226478    5256 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 04:39:36.230456    5256 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:39:36.234462    5256 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:39:36.237485    5256 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	I0911 04:39:36.240489    5256 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:39:36.243787    5256 config.go:182] Loaded profile config "default-k8s-diff-port-775000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:39:36.244036    5256 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:39:36.250002    5256 out.go:177] * Using the qemu2 driver based on existing profile
	I0911 04:39:36.257437    5256 start.go:298] selected driver: qemu2
	I0911 04:39:36.257443    5256 start.go:902] validating driver "qemu2" against &{Name:default-k8s-diff-port-775000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-775000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:39:36.257503    5256 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:39:36.259528    5256 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:39:36.259559    5256 cni.go:84] Creating CNI manager for ""
	I0911 04:39:36.259566    5256 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 04:39:36.259571    5256 start_flags.go:321] config:
	{Name:default-k8s-diff-port-775000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-775000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:39:36.263622    5256 iso.go:125] acquiring lock: {Name:mk940869f3ea2950f9d7f9f25946c8b774cc6054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:39:36.271369    5256 out.go:177] * Starting control plane node default-k8s-diff-port-775000 in cluster default-k8s-diff-port-775000
	I0911 04:39:36.275534    5256 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:39:36.275556    5256 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:39:36.275574    5256 cache.go:57] Caching tarball of preloaded images
	I0911 04:39:36.275644    5256 preload.go:174] Found /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:39:36.275649    5256 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:39:36.275721    5256 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/default-k8s-diff-port-775000/config.json ...
	I0911 04:39:36.276015    5256 start.go:365] acquiring machines lock for default-k8s-diff-port-775000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:39:37.965228    5256 start.go:369] acquired machines lock for "default-k8s-diff-port-775000" in 1.689056041s
	I0911 04:39:37.965308    5256 start.go:96] Skipping create...Using existing machine configuration
	I0911 04:39:37.965339    5256 fix.go:54] fixHost starting: 
	I0911 04:39:37.966043    5256 fix.go:102] recreateIfNeeded on default-k8s-diff-port-775000: state=Stopped err=<nil>
	W0911 04:39:37.966102    5256 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 04:39:37.975388    5256 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-775000" ...
	I0911 04:39:37.985815    5256 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/default-k8s-diff-port-775000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/default-k8s-diff-port-775000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/default-k8s-diff-port-775000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:52:69:61:0d:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/default-k8s-diff-port-775000/disk.qcow2
	I0911 04:39:37.996116    5256 main.go:141] libmachine: STDOUT: 
	I0911 04:39:37.996177    5256 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:39:37.996300    5256 fix.go:56] fixHost completed within 30.961ms
	I0911 04:39:37.996323    5256 start.go:83] releasing machines lock for "default-k8s-diff-port-775000", held for 31.063625ms
	W0911 04:39:37.996356    5256 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:39:37.996491    5256 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:39:37.996508    5256 start.go:687] Will try again in 5 seconds ...
	I0911 04:39:42.998724    5256 start.go:365] acquiring machines lock for default-k8s-diff-port-775000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:39:42.999104    5256 start.go:369] acquired machines lock for "default-k8s-diff-port-775000" in 305.625µs
	I0911 04:39:42.999241    5256 start.go:96] Skipping create...Using existing machine configuration
	I0911 04:39:42.999260    5256 fix.go:54] fixHost starting: 
	I0911 04:39:43.000026    5256 fix.go:102] recreateIfNeeded on default-k8s-diff-port-775000: state=Stopped err=<nil>
	W0911 04:39:43.000055    5256 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 04:39:43.005612    5256 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-775000" ...
	I0911 04:39:43.009978    5256 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/default-k8s-diff-port-775000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/default-k8s-diff-port-775000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/default-k8s-diff-port-775000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:52:69:61:0d:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/default-k8s-diff-port-775000/disk.qcow2
	I0911 04:39:43.019324    5256 main.go:141] libmachine: STDOUT: 
	I0911 04:39:43.019378    5256 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:39:43.019465    5256 fix.go:56] fixHost completed within 20.206375ms
	I0911 04:39:43.019484    5256 start.go:83] releasing machines lock for "default-k8s-diff-port-775000", held for 20.355208ms
	W0911 04:39:43.019681    5256 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-775000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-775000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:39:43.035590    5256 out.go:177] 
	W0911 04:39:43.039691    5256 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:39:43.039721    5256 out.go:239] * 
	* 
	W0911 04:39:43.041019    5256 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:39:43.050490    5256 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-775000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-775000 -n default-k8s-diff-port-775000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-775000 -n default-k8s-diff-port-775000: exit status 7 (42.308958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-775000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.93s)
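
Note: the restart path fails exactly like the fresh-create path because, as the executed command lines above show, the qemu2 driver always launches QEMU through the socket_vmnet_client wrapper, prepending the client binary and socket path to the qemu-system-aarch64 argument list. Schematically (a sketch of the invocation shape only, not minikube's actual source):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// qemuArgs stands in for the long -M/-cpu/-cdrom/... list in the log above.
		qemuArgs := []string{"-M", "virt,highmem=off", "-cpu", "host"}
		args := append([]string{"/var/run/socket_vmnet", "qemu-system-aarch64"}, qemuArgs...)
		cmd := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client", args...)
		// While the daemon is down, cmd.Run() fails before QEMU ever starts,
		// which is why no qemu.pid is written and status reports "Stopped".
		fmt.Println(cmd.String())
	}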

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-775000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-775000 -n default-k8s-diff-port-775000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-775000 -n default-k8s-diff-port-775000: exit status 7 (32.858417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-775000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-775000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-775000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-775000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.041167ms)

** stderr ** 
	error: context "default-k8s-diff-port-775000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-775000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-775000 -n default-k8s-diff-port-775000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-775000 -n default-k8s-diff-port-775000: exit status 7 (32.406666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-775000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-775000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-775000 "sudo crictl images -o json": exit status 89 (152.070709ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-775000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-775000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p default-k8s-diff-port-775000"
start_stop_delete_test.go:304: v1.28.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.1",
- 	"registry.k8s.io/kube-controller-manager:v1.28.1",
- 	"registry.k8s.io/kube-proxy:v1.28.1",
- 	"registry.k8s.io/kube-scheduler:v1.28.1",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-775000 -n default-k8s-diff-port-775000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-775000 -n default-k8s-diff-port-775000: exit status 7 (34.868ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-775000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.19s)
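
Note: the "failed to decode images json" message follows directly from the stopped host: "minikube ssh" prints the advisory text beginning with "*" instead of crictl's JSON, so the test's decoder fails on the very first byte. The decode step amounts to the following (the field names are an assumption based on crictl's JSON output shape, not taken from the test source):

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// The subset of "crictl images -o json" the test compares against its want-list.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		var list imageList
		if err := json.NewDecoder(os.Stdin).Decode(&list); err != nil {
			// Fed "* The control plane node must be running...", this fails with
			// invalid character '*' looking for beginning of value, as above.
			fmt.Fprintln(os.Stderr, "failed to decode images json:", err)
			os.Exit(1)
		}
		for _, img := range list.Images {
			fmt.Println(img.RepoTags)
		}
	}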

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-775000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-775000 --alsologtostderr -v=1: exit status 89 (39.123541ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-775000"

                                                
** stderr ** 
	I0911 04:39:43.412602    5279 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:39:43.412754    5279 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:39:43.412757    5279 out.go:309] Setting ErrFile to fd 2...
	I0911 04:39:43.412759    5279 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:39:43.412880    5279 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:39:43.413094    5279 out.go:303] Setting JSON to false
	I0911 04:39:43.413102    5279 mustload.go:65] Loading cluster: default-k8s-diff-port-775000
	I0911 04:39:43.413273    5279 config.go:182] Loaded profile config "default-k8s-diff-port-775000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:39:43.417600    5279 out.go:177] * The control plane node must be running for this command
	I0911 04:39:43.420721    5279 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-775000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-775000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-775000 -n default-k8s-diff-port-775000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-775000 -n default-k8s-diff-port-775000: exit status 7 (27.890917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-775000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-775000 -n default-k8s-diff-port-775000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-775000 -n default-k8s-diff-port-775000: exit status 7 (27.744584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-775000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
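The post-mortem helper shown twice above tolerates this outcome: "minikube status --format={{.Host}}" exits with status 7 when the host is stopped, and helpers_test.go logs that as "may be ok" rather than a fresh failure. The following hedged Go sketch reproduces that probe; the binary path and profile name come from the log, while the exit-code handling is an illustration of the harness's behavior, not its exact code.

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same probe the post-mortem runs above.
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "default-k8s-diff-port-775000")
	out, err := cmd.Output() // stdout is captured even on a non-zero exit
	state := strings.TrimSpace(string(out))
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// The log shows exit status 7 with "Stopped" on stdout; the harness
		// records this as "(may be ok)" instead of failing the post-mortem.
		fmt.Printf("status exited %d, host state %q (may be ok)\n", ee.ExitCode(), state)
		return
	}
	if err != nil {
		fmt.Println("could not run status:", err)
		return
	}
	fmt.Println("host state:", state)
}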

TestStartStop/group/newest-cni/serial/SecondStart (5.24s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-757000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-757000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (5.170330291s)

-- stdout --
	* [newest-cni-757000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node newest-cni-757000 in cluster newest-cni-757000
	* Restarting existing qemu2 VM for "newest-cni-757000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-757000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0911 04:39:45.711566    5312 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:39:45.711678    5312 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:39:45.711681    5312 out.go:309] Setting ErrFile to fd 2...
	I0911 04:39:45.711684    5312 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:39:45.711796    5312 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:39:45.712750    5312 out.go:303] Setting JSON to false
	I0911 04:39:45.728902    5312 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4159,"bootTime":1694428226,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0911 04:39:45.728964    5312 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:39:45.734142    5312 out.go:177] * [newest-cni-757000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:39:45.741112    5312 out.go:177]   - MINIKUBE_LOCATION=17225
	I0911 04:39:45.741146    5312 notify.go:220] Checking for updates...
	I0911 04:39:45.745026    5312 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 04:39:45.748053    5312 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:39:45.751091    5312 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:39:45.754060    5312 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	I0911 04:39:45.755411    5312 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:39:45.758261    5312 config.go:182] Loaded profile config "newest-cni-757000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:39:45.758497    5312 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:39:45.763077    5312 out.go:177] * Using the qemu2 driver based on existing profile
	I0911 04:39:45.767975    5312 start.go:298] selected driver: qemu2
	I0911 04:39:45.767980    5312 start.go:902] validating driver "qemu2" against &{Name:newest-cni-757000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-757000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:39:45.768034    5312 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:39:45.770032    5312 start_flags.go:941] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0911 04:39:45.770056    5312 cni.go:84] Creating CNI manager for ""
	I0911 04:39:45.770064    5312 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 04:39:45.770070    5312 start_flags.go:321] config:
	{Name:newest-cni-757000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-757000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:39:45.774125    5312 iso.go:125] acquiring lock: {Name:mk940869f3ea2950f9d7f9f25946c8b774cc6054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:39:45.780985    5312 out.go:177] * Starting control plane node newest-cni-757000 in cluster newest-cni-757000
	I0911 04:39:45.785008    5312 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:39:45.785029    5312 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:39:45.785039    5312 cache.go:57] Caching tarball of preloaded images
	I0911 04:39:45.785097    5312 preload.go:174] Found /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:39:45.785102    5312 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:39:45.785166    5312 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/newest-cni-757000/config.json ...
	I0911 04:39:45.785534    5312 start.go:365] acquiring machines lock for newest-cni-757000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:39:45.785568    5312 start.go:369] acquired machines lock for "newest-cni-757000" in 28.375µs
	I0911 04:39:45.785578    5312 start.go:96] Skipping create...Using existing machine configuration
	I0911 04:39:45.785584    5312 fix.go:54] fixHost starting: 
	I0911 04:39:45.785714    5312 fix.go:102] recreateIfNeeded on newest-cni-757000: state=Stopped err=<nil>
	W0911 04:39:45.785722    5312 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 04:39:45.790012    5312 out.go:177] * Restarting existing qemu2 VM for "newest-cni-757000" ...
	I0911 04:39:45.798066    5312 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/newest-cni-757000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/newest-cni-757000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/newest-cni-757000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:f7:69:cf:f1:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/newest-cni-757000/disk.qcow2
	I0911 04:39:45.799921    5312 main.go:141] libmachine: STDOUT: 
	I0911 04:39:45.799940    5312 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:39:45.799970    5312 fix.go:56] fixHost completed within 14.387334ms
	I0911 04:39:45.799974    5312 start.go:83] releasing machines lock for "newest-cni-757000", held for 14.401792ms
	W0911 04:39:45.799980    5312 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:39:45.800031    5312 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:39:45.800035    5312 start.go:687] Will try again in 5 seconds ...
	I0911 04:39:50.802274    5312 start.go:365] acquiring machines lock for newest-cni-757000: {Name:mk566ae5c3a47b82fb2ec8c89cbc5c1134299e3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:39:50.802709    5312 start.go:369] acquired machines lock for "newest-cni-757000" in 348.291µs
	I0911 04:39:50.802862    5312 start.go:96] Skipping create...Using existing machine configuration
	I0911 04:39:50.802884    5312 fix.go:54] fixHost starting: 
	I0911 04:39:50.803582    5312 fix.go:102] recreateIfNeeded on newest-cni-757000: state=Stopped err=<nil>
	W0911 04:39:50.803611    5312 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 04:39:50.807994    5312 out.go:177] * Restarting existing qemu2 VM for "newest-cni-757000" ...
	I0911 04:39:50.811336    5312 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17225-951/.minikube/machines/newest-cni-757000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17225-951/.minikube/machines/newest-cni-757000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17225-951/.minikube/machines/newest-cni-757000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:f7:69:cf:f1:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17225-951/.minikube/machines/newest-cni-757000/disk.qcow2
	I0911 04:39:50.820140    5312 main.go:141] libmachine: STDOUT: 
	I0911 04:39:50.820225    5312 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:39:50.820322    5312 fix.go:56] fixHost completed within 17.442125ms
	I0911 04:39:50.820337    5312 start.go:83] releasing machines lock for "newest-cni-757000", held for 17.607584ms
	W0911 04:39:50.820564    5312 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-757000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-757000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:39:50.828014    5312 out.go:177] 
	W0911 04:39:50.831921    5312 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:39:50.831946    5312 out.go:239] * 
	* 
	W0911 04:39:50.834620    5312 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:39:50.842950    5312 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-757000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-757000 -n newest-cni-757000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-757000 -n newest-cni-757000: exit status 7 (67.127125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-757000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.24s)
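Every qemu2 restart in this group fails the same way: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the VM's network device never attaches and minikube gives up after one retry. A small pre-flight probe like the sketch below, which is an assumption about how one might check the daemon rather than anything the harness does, would distinguish this environment problem from a genuine minikube regression.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// socket_vmnet's listening path, as reported in the restart failures above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "Connection refused" here matches the error socket_vmnet_client
		// prints while qemu is being restarted in the log.
		fmt.Printf("socket_vmnet unreachable at %s: %v\n", sock, err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}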

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p newest-cni-757000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p newest-cni-757000 "sudo crictl images -o json": exit status 89 (43.304584ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-757000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p newest-cni-757000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p newest-cni-757000"
start_stop_delete_test.go:304: v1.28.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.1",
- 	"registry.k8s.io/kube-controller-manager:v1.28.1",
- 	"registry.k8s.io/kube-proxy:v1.28.1",
- 	"registry.k8s.io/kube-scheduler:v1.28.1",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-757000 -n newest-cni-757000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-757000 -n newest-cni-757000: exit status 7 (28.662375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-757000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-757000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-757000 --alsologtostderr -v=1: exit status 89 (39.8655ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-757000"

-- /stdout --
** stderr ** 
	I0911 04:39:51.023281    5329 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:39:51.023428    5329 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:39:51.023431    5329 out.go:309] Setting ErrFile to fd 2...
	I0911 04:39:51.023433    5329 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:39:51.023552    5329 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:39:51.023760    5329 out.go:303] Setting JSON to false
	I0911 04:39:51.023767    5329 mustload.go:65] Loading cluster: newest-cni-757000
	I0911 04:39:51.023937    5329 config.go:182] Loaded profile config "newest-cni-757000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:39:51.028037    5329 out.go:177] * The control plane node must be running for this command
	I0911 04:39:51.031976    5329 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-757000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-757000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-757000 -n newest-cni-757000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-757000 -n newest-cni-757000: exit status 7 (28.758292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-757000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-757000 -n newest-cni-757000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-757000 -n newest-cni-757000: exit status 7 (28.497375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-757000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)


Test pass (105/248)

Order passed test Duration
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.1
10 TestDownloadOnly/v1.28.1/json-events 11.15
11 TestDownloadOnly/v1.28.1/preload-exists 0
14 TestDownloadOnly/v1.28.1/kubectl 0
15 TestDownloadOnly/v1.28.1/LogsDuration 0.07
16 TestDownloadOnly/DeleteAll 0.23
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.24
19 TestBinaryMirror 0.39
30 TestHyperKitDriverInstallOrUpdate 8.34
33 TestErrorSpam/setup 29.06
34 TestErrorSpam/start 0.34
35 TestErrorSpam/status 0.25
36 TestErrorSpam/pause 0.63
37 TestErrorSpam/unpause 0.61
38 TestErrorSpam/stop 3.23
41 TestFunctional/serial/CopySyncFile 0
43 TestFunctional/serial/AuditLog 0
49 TestFunctional/serial/CacheCmd/cache/add_remote 361
50 TestFunctional/serial/CacheCmd/cache/add_local 60.4
51 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.03
52 TestFunctional/serial/CacheCmd/cache/list 0.03
55 TestFunctional/serial/CacheCmd/cache/delete 0.1
60 TestFunctional/serial/LogsCmd 180.56
64 TestFunctional/parallel/ConfigCmd 0.2
66 TestFunctional/parallel/DryRun 0.27
67 TestFunctional/parallel/InternationalLanguage 0.1
73 TestFunctional/parallel/AddonsCmd 0.12
76 TestFunctional/parallel/SSHCmd 0.13
77 TestFunctional/parallel/CpCmd 0.29
79 TestFunctional/parallel/FileSync 0.07
86 TestFunctional/parallel/NonActiveRuntimeDisabled 0.17
88 TestFunctional/parallel/License 0.19
89 TestFunctional/parallel/Version/short 0.03
90 TestFunctional/parallel/Version/components 0.16
96 TestFunctional/parallel/ImageCommands/Setup 1.47
102 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
103 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
112 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
120 TestFunctional/parallel/ImageCommands/ImageRemove 120.29
121 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
122 TestFunctional/parallel/ProfileCmd/profile_not_create 0.18
123 TestFunctional/parallel/ProfileCmd/profile_list 0.14
124 TestFunctional/parallel/ProfileCmd/profile_json_output 0.14
126 TestFunctional/parallel/MountCmd/specific-port 0.84
129 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 60.13
130 TestFunctional/delete_addon-resizer_images 0.18
131 TestFunctional/delete_my-image_image 0.04
132 TestFunctional/delete_minikube_cached_images 0.04
136 TestImageBuild/serial/Setup 30.16
137 TestImageBuild/serial/NormalBuild 1.04
139 TestImageBuild/serial/BuildWithDockerIgnore 0.17
140 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.11
143 TestIngressAddonLegacy/StartLegacyK8sCluster 65.22
145 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 16.34
146 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.24
150 TestJSONOutput/start/Command 44.83
151 TestJSONOutput/start/Audit 0
153 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
154 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
156 TestJSONOutput/pause/Command 0.3
157 TestJSONOutput/pause/Audit 0
159 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
160 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
162 TestJSONOutput/unpause/Command 0.23
163 TestJSONOutput/unpause/Audit 0
165 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
166 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
168 TestJSONOutput/stop/Command 12.08
169 TestJSONOutput/stop/Audit 0
171 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
173 TestErrorJSONOutput 0.32
178 TestMainNoArgs 0.03
182 TestMountStart/serial/StartWithMountFirst 17.12
183 TestMountStart/serial/VerifyMountFirst 0.19
184 TestMountStart/serial/StartWithMountSecond 18.53
185 TestMountStart/serial/VerifyMountSecond 0.21
186 TestMountStart/serial/DeleteFirst 0.09
190 TestMultiNode/serial/FreshStart2Nodes 98.12
191 TestMultiNode/serial/DeployApp2Nodes 3.8
192 TestMultiNode/serial/PingHostFrom2Pods 0.54
193 TestMultiNode/serial/AddNode 36.57
194 TestMultiNode/serial/ProfileList 0.17
195 TestMultiNode/serial/CopyFile 2.49
240 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
244 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
245 TestNoKubernetes/serial/ProfileList 0.14
246 TestNoKubernetes/serial/Stop 0.06
248 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
266 TestStartStop/group/old-k8s-version/serial/Stop 0.06
267 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.09
277 TestStartStop/group/no-preload/serial/Stop 0.06
278 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.09
288 TestStartStop/group/embed-certs/serial/Stop 0.07
289 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.09
299 TestStartStop/group/default-k8s-diff-port/serial/Stop 0.06
300 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.09
306 TestStartStop/group/newest-cni/serial/DeployApp 0
307 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
308 TestStartStop/group/newest-cni/serial/Stop 0.06
309 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.09
311 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
312 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-074000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-074000: exit status 85 (97.160125ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-074000 | jenkins | v1.31.2 | 11 Sep 23 03:33 PDT |          |
	|         | -p download-only-074000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/11 03:33:27
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0911 03:33:27.541399    1395 out.go:296] Setting OutFile to fd 1 ...
	I0911 03:33:27.541522    1395 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 03:33:27.541524    1395 out.go:309] Setting ErrFile to fd 2...
	I0911 03:33:27.541527    1395 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 03:33:27.541656    1395 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	W0911 03:33:27.541728    1395 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17225-951/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17225-951/.minikube/config/config.json: no such file or directory
	I0911 03:33:27.542878    1395 out.go:303] Setting JSON to true
	I0911 03:33:27.559195    1395 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":181,"bootTime":1694428226,"procs":396,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0911 03:33:27.559253    1395 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 03:33:27.567860    1395 out.go:97] [download-only-074000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	W0911 03:33:27.568001    1395 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball: no such file or directory
	I0911 03:33:27.571797    1395 out.go:169] MINIKUBE_LOCATION=17225
	I0911 03:33:27.568022    1395 notify.go:220] Checking for updates...
	I0911 03:33:27.582759    1395 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 03:33:27.585798    1395 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 03:33:27.588778    1395 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 03:33:27.591822    1395 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	W0911 03:33:27.595791    1395 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0911 03:33:27.596014    1395 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 03:33:27.600809    1395 out.go:97] Using the qemu2 driver based on user configuration
	I0911 03:33:27.600815    1395 start.go:298] selected driver: qemu2
	I0911 03:33:27.600817    1395 start.go:902] validating driver "qemu2" against <nil>
	I0911 03:33:27.600857    1395 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 03:33:27.604835    1395 out.go:169] Automatically selected the socket_vmnet network
	I0911 03:33:27.610284    1395 start_flags.go:384] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0911 03:33:27.610375    1395 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0911 03:33:27.610440    1395 cni.go:84] Creating CNI manager for ""
	I0911 03:33:27.610456    1395 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0911 03:33:27.610462    1395 start_flags.go:321] config:
	{Name:download-only-074000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-074000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 03:33:27.615973    1395 iso.go:125] acquiring lock: {Name:mk940869f3ea2950f9d7f9f25946c8b774cc6054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 03:33:27.619818    1395 out.go:97] Downloading VM boot image ...
	I0911 03:33:27.619835    1395 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17225-951/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso
	I0911 03:33:33.399441    1395 out.go:97] Starting control plane node download-only-074000 in cluster download-only-074000
	I0911 03:33:33.399465    1395 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0911 03:33:33.455507    1395 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0911 03:33:33.455595    1395 cache.go:57] Caching tarball of preloaded images
	I0911 03:33:33.455753    1395 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0911 03:33:33.461871    1395 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0911 03:33:33.461876    1395 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0911 03:33:33.547024    1395 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0911 03:33:39.783965    1395 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0911 03:33:39.784104    1395 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0911 03:33:40.424308    1395 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0911 03:33:40.424496    1395 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/download-only-074000/config.json ...
	I0911 03:33:40.424514    1395 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/download-only-074000/config.json: {Name:mka4b0a642bec3408aafe4290f6afa7a17904e8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:33:40.424733    1395 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0911 03:33:40.424908    1395 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17225-951/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0911 03:33:40.786850    1395 out.go:169] 
	W0911 03:33:40.791862    1395 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17225-951/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104a99f68 0x104a99f68 0x104a99f68 0x104a99f68 0x104a99f68 0x104a99f68 0x104a99f68] Decompressors:map[bz2:0x1400011e588 gz:0x1400011e5e0 tar:0x1400011e590 tar.bz2:0x1400011e5a0 tar.gz:0x1400011e5b0 tar.xz:0x1400011e5c0 tar.zst:0x1400011e5d0 tbz2:0x1400011e5a0 tgz:0x1400011e5b0 txz:0x1400011e5c0 tzst:0x1400011e5d0 xz:0x1400011e5e8 zip:0x1400011e5f0 zst:0x1400011e600] Getters:map[file:0x140005cc5b0 http:0x14000dca640 https:0x14000dca690] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0911 03:33:40.791889    1395 out_reason.go:110] 
	W0911 03:33:40.797780    1395 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 03:33:40.801791    1395 out.go:169] 
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-074000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.10s)
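The root cause of the exit status 85 above is visible in the Last Start log: dl.k8s.io has no darwin/arm64 kubectl build for v1.16.0, so the checksum URL returns 404 and the kubectl cache step aborts. A quick hypothetical check of that URL (the address is copied from the getter error in the log; nothing here is part of the test suite):

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// The checksum file the getter fetches first, per the log above.
	const url = "https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1"
	resp, err := http.Head(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	// Expected per the log: a 404, because no darwin/arm64 kubectl binary
	// was ever published for Kubernetes v1.16.0.
	fmt.Println(url, "->", resp.Status)
}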

TestDownloadOnly/v1.28.1/json-events (11.15s)

=== RUN   TestDownloadOnly/v1.28.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-074000 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-074000 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=docker --driver=qemu2 : (11.147778834s)
--- PASS: TestDownloadOnly/v1.28.1/json-events (11.15s)

TestDownloadOnly/v1.28.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.1/preload-exists
--- PASS: TestDownloadOnly/v1.28.1/preload-exists (0.00s)

TestDownloadOnly/v1.28.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.1/kubectl
--- PASS: TestDownloadOnly/v1.28.1/kubectl (0.00s)

TestDownloadOnly/v1.28.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.1/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-074000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-074000: exit status 85 (74.757542ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-074000 | jenkins | v1.31.2 | 11 Sep 23 03:33 PDT |          |
	|         | -p download-only-074000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-074000 | jenkins | v1.31.2 | 11 Sep 23 03:33 PDT |          |
	|         | -p download-only-074000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.1   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/11 03:33:40
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0911 03:33:40.987966    1406 out.go:296] Setting OutFile to fd 1 ...
	I0911 03:33:40.988072    1406 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 03:33:40.988075    1406 out.go:309] Setting ErrFile to fd 2...
	I0911 03:33:40.988077    1406 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 03:33:40.988195    1406 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	W0911 03:33:40.988265    1406 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17225-951/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17225-951/.minikube/config/config.json: no such file or directory
	I0911 03:33:40.989168    1406 out.go:303] Setting JSON to true
	I0911 03:33:41.003953    1406 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":194,"bootTime":1694428226,"procs":383,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0911 03:33:41.004020    1406 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 03:33:41.007788    1406 out.go:97] [download-only-074000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 03:33:41.011761    1406 out.go:169] MINIKUBE_LOCATION=17225
	I0911 03:33:41.007882    1406 notify.go:220] Checking for updates...
	I0911 03:33:41.017803    1406 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 03:33:41.020758    1406 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 03:33:41.023766    1406 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 03:33:41.026783    1406 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	W0911 03:33:41.031151    1406 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0911 03:33:41.031405    1406 config.go:182] Loaded profile config "download-only-074000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0911 03:33:41.031425    1406 start.go:810] api.Load failed for download-only-074000: filestore "download-only-074000": Docker machine "download-only-074000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0911 03:33:41.031466    1406 driver.go:373] Setting default libvirt URI to qemu:///system
	W0911 03:33:41.031479    1406 start.go:810] api.Load failed for download-only-074000: filestore "download-only-074000": Docker machine "download-only-074000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0911 03:33:41.034705    1406 out.go:97] Using the qemu2 driver based on existing profile
	I0911 03:33:41.034711    1406 start.go:298] selected driver: qemu2
	I0911 03:33:41.034713    1406 start.go:902] validating driver "qemu2" against &{Name:download-only-074000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-074000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 03:33:41.036581    1406 cni.go:84] Creating CNI manager for ""
	I0911 03:33:41.036593    1406 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 03:33:41.036600    1406 start_flags.go:321] config:
	{Name:download-only-074000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:download-only-074000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 03:33:41.040380    1406 iso.go:125] acquiring lock: {Name:mk940869f3ea2950f9d7f9f25946c8b774cc6054 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 03:33:41.043792    1406 out.go:97] Starting control plane node download-only-074000 in cluster download-only-074000
	I0911 03:33:41.043799    1406 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 03:33:41.099504    1406 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.1/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 03:33:41.099526    1406 cache.go:57] Caching tarball of preloaded images
	I0911 03:33:41.099727    1406 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 03:33:41.104792    1406 out.go:97] Downloading Kubernetes v1.28.1 preload ...
	I0911 03:33:41.104805    1406 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 ...
	I0911 03:33:41.184575    1406 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.1/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4?checksum=md5:014fa2c9750ed18a91c50dffb6ed7aeb -> /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 03:33:46.044970    1406 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 ...
	I0911 03:33:46.045144    1406 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17225-951/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 ...
	I0911 03:33:46.624893    1406 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 03:33:46.624966    1406 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/download-only-074000/config.json ...
	I0911 03:33:46.625227    1406 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 03:33:46.625385    1406 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17225-951/.minikube/cache/darwin/arm64/v1.28.1/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-074000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.1/LogsDuration (0.07s)
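
The preload flow logged above downloads the tarball with a `?checksum=md5:...` query, then verifies and records the checksum before caching it (preload.go:238-256). A minimal Go sketch of that verification step, assuming a placeholder local path; the expected hash is the md5 from the URL above, and this is an illustration, not minikube's actual implementation:

```go
// Hash a downloaded tarball and compare it to the md5 carried in the
// download URL's ?checksum=md5:... query parameter.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// Placeholder path; the hash is the one from the log's download URL.
	if err := verifyMD5("preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4",
		"014fa2c9750ed18a91c50dffb6ed7aeb"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("checksum OK")
}
```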

TestDownloadOnly/DeleteAll (0.23s)
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.23s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.24s)
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-074000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.24s)

TestBinaryMirror (0.39s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-427000 --alsologtostderr --binary-mirror http://127.0.0.1:49310 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-427000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-427000
--- PASS: TestBinaryMirror (0.39s)

TestHyperKitDriverInstallOrUpdate (8.34s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (8.34s)

TestErrorSpam/setup (29.06s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-002000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-002000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-002000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-002000 --driver=qemu2 : (29.061946959s)
--- PASS: TestErrorSpam/setup (29.06s)

TestErrorSpam/start (0.34s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-002000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-002000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-002000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-002000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-002000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-002000 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.25s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-002000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-002000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-002000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-002000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-002000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-002000 status
--- PASS: TestErrorSpam/status (0.25s)

TestErrorSpam/pause (0.63s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-002000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-002000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-002000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-002000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-002000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-002000 pause
--- PASS: TestErrorSpam/pause (0.63s)

TestErrorSpam/unpause (0.61s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-002000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-002000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-002000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-002000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-002000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-002000 unpause
--- PASS: TestErrorSpam/unpause (0.61s)

TestErrorSpam/stop (3.23s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-002000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-002000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-002000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-002000 stop: (3.067594708s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-002000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-002000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-002000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-002000 stop
--- PASS: TestErrorSpam/stop (3.23s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/17225-951/.minikube/files/etc/test/nested/copy/1393/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (361s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-942000 cache add registry.k8s.io/pause:3.1: (2m0.007908s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-942000 cache add registry.k8s.io/pause:3.3: (2m0.493699833s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-942000 cache add registry.k8s.io/pause:latest: (2m0.492378083s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (361.00s)

TestFunctional/serial/CacheCmd/cache/add_local (60.4s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-942000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3100911718/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 cache add minikube-local-cache-test:functional-942000
functional_test.go:1085: (dbg) Done: out/minikube-darwin-arm64 -p functional-942000 cache add minikube-local-cache-test:functional-942000: (59.995680708s)
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 cache delete minikube-local-cache-test:functional-942000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-942000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (60.40s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/delete (0.1s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/LogsCmd (180.56s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-arm64 -p functional-942000 logs: (3m0.562825083s)
--- PASS: TestFunctional/serial/LogsCmd (180.56s)

TestFunctional/parallel/ConfigCmd (0.2s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-942000 config get cpus: exit status 14 (28.73125ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-942000 config get cpus: exit status 14 (28.007166ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.20s)
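
The run above shows the contract ConfigCmd asserts: `config get` on an unset key fails with exit status 14 and an error on stderr instead of printing an empty value. A small Go sketch of detecting that from a caller, with `minikube` standing in for the `out/minikube-darwin-arm64` binary under test:

```go
// Run "config get" and distinguish "key unset" (non-zero exit) from a
// failure to launch the binary at all.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "functional-942000", "config", "get", "cpus")
	out, err := cmd.CombinedOutput()
	if exitErr, ok := err.(*exec.ExitError); ok {
		// An unset key surfaces as an exit code (14 in the log above).
		fmt.Printf("exit %d: %s", exitErr.ExitCode(), out)
		return
	}
	if err != nil {
		fmt.Println("could not run minikube:", err)
		return
	}
	fmt.Printf("cpus = %s", out)
}
```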

TestFunctional/parallel/DryRun (0.27s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-942000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-942000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (159.820583ms)

-- stdout --
	* [functional-942000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0911 04:07:21.252852    2335 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:07:21.253075    2335 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:07:21.253080    2335 out.go:309] Setting ErrFile to fd 2...
	I0911 04:07:21.253084    2335 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:07:21.253260    2335 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:07:21.254709    2335 out.go:303] Setting JSON to false
	I0911 04:07:21.274596    2335 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2215,"bootTime":1694428226,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0911 04:07:21.274673    2335 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:07:21.279733    2335 out.go:177] * [functional-942000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:07:21.286692    2335 out.go:177]   - MINIKUBE_LOCATION=17225
	I0911 04:07:21.286716    2335 notify.go:220] Checking for updates...
	I0911 04:07:21.290676    2335 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 04:07:21.294723    2335 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:07:21.298633    2335 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:07:21.300149    2335 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	I0911 04:07:21.303651    2335 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:07:21.306941    2335 config.go:182] Loaded profile config "functional-942000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:07:21.307235    2335 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:07:21.311514    2335 out.go:177] * Using the qemu2 driver based on existing profile
	I0911 04:07:21.322669    2335 start.go:298] selected driver: qemu2
	I0911 04:07:21.322677    2335 start.go:902] validating driver "qemu2" against &{Name:functional-942000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-942000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:07:21.322748    2335 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:07:21.329664    2335 out.go:177] 
	W0911 04:07:21.333669    2335 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0911 04:07:21.336644    2335 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-942000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.27s)
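
The dry run exits with status 23 because the requested 250MB is below minikube's usable memory floor. A simplified sketch of that guard, with the 1800MB minimum taken from the error text above; real minikube also parses unit suffixes and accounts for overhead, which this omits:

```go
// Reject a memory request below the usable minimum, mirroring the
// RSRC_INSUFFICIENT_REQ_MEMORY failure shown in the log.
package main

import (
	"fmt"
	"os"
)

const minUsableMB = 1800 // floor reported by the error above

func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMB)
	}
	return nil
}

func main() {
	if err := validateMemory(250); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to", err)
		os.Exit(23) // exit status observed in the test
	}
}
```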

TestFunctional/parallel/InternationalLanguage (0.1s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-942000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-942000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (101.053791ms)

-- stdout --
	* [functional-942000] minikube v1.31.2 sur Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0911 04:07:21.473649    2346 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:07:21.473751    2346 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:07:21.473754    2346 out.go:309] Setting ErrFile to fd 2...
	I0911 04:07:21.473757    2346 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:07:21.473889    2346 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17225-951/.minikube/bin
	I0911 04:07:21.475332    2346 out.go:303] Setting JSON to false
	I0911 04:07:21.490783    2346 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2215,"bootTime":1694428226,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0911 04:07:21.490872    2346 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:07:21.494725    2346 out.go:177] * [functional-942000] minikube v1.31.2 sur Darwin 13.5.2 (arm64)
	I0911 04:07:21.501676    2346 out.go:177]   - MINIKUBE_LOCATION=17225
	I0911 04:07:21.501730    2346 notify.go:220] Checking for updates...
	I0911 04:07:21.505644    2346 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	I0911 04:07:21.508615    2346 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:07:21.511684    2346 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:07:21.514647    2346 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	I0911 04:07:21.517649    2346 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:07:21.520966    2346 config.go:182] Loaded profile config "functional-942000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:07:21.521186    2346 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:07:21.525625    2346 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0911 04:07:21.532657    2346 start.go:298] selected driver: qemu2
	I0911 04:07:21.532664    2346 start.go:902] validating driver "qemu2" against &{Name:functional-942000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-942000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:07:21.532729    2346 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:07:21.538655    2346 out.go:177] 
	W0911 04:07:21.542619    2346 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0911 04:07:21.545646    2346 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.10s)
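
The French output above is the same dry-run path with user-facing strings translated from the host locale. A rough sketch of that lookup pattern, assuming a hypothetical two-entry message table; minikube's real translation machinery is not shown here:

```go
// Pick a translated message based on LC_ALL/LANG; the strings are the
// English and French variants from the log above.
package main

import (
	"fmt"
	"os"
	"strings"
)

var messages = map[string]string{
	"en": "Using the qemu2 driver based on existing profile",
	"fr": "Utilisation du pilote qemu2 basé sur le profil existant",
}

func locale() string {
	for _, v := range []string{os.Getenv("LC_ALL"), os.Getenv("LANG")} {
		if strings.HasPrefix(v, "fr") {
			return "fr"
		}
	}
	return "en"
}

func main() {
	fmt.Println("*", messages[locale()])
}
```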

TestFunctional/parallel/AddonsCmd (0.12s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/SSHCmd (0.13s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.29s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 ssh -n functional-942000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 cp functional-942000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2653476293/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 ssh -n functional-942000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.29s)

TestFunctional/parallel/FileSync (0.07s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1393/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 ssh "sudo cat /etc/test/nested/copy/1393/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.17s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-942000 ssh "sudo systemctl is-active crio": exit status 1 (167.860417ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.17s)

TestFunctional/parallel/License (0.19s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.19s)

TestFunctional/parallel/Version/short (0.03s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.03s)

TestFunctional/parallel/Version/components (0.16s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.16s)

TestFunctional/parallel/ImageCommands/Setup (1.47s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.415253916s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-942000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.47s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-942000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.01166675s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
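
The DNS check above shells out to macOS's `dscacheutil` to resolve the tunnelled service name through the system resolver (the ten seconds in the log is resolver latency, not a retry loop). A minimal Go sketch of the same probe, with simplified error handling:

```go
// Query the macOS directory-services cache for the in-cluster name and
// treat any returned ip_address line as success.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	name := "nginx-svc.default.svc.cluster.local."
	out, err := exec.Command("dscacheutil", "-q", "host", "-a", "name", name).Output()
	if err != nil {
		fmt.Println("query failed:", err)
		return
	}
	if strings.Contains(string(out), "ip_address:") {
		fmt.Println("DNS resolution for", name, "is working")
	} else {
		fmt.Println("no address returned for", name)
	}
}
```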

TestFunctional/parallel/ImageCommands/ImageRemove (120.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 image rm gcr.io/google-containers/addon-resizer:functional-942000 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-darwin-arm64 -p functional-942000 image rm gcr.io/google-containers/addon-resizer:functional-942000 --alsologtostderr: (1m0.146132458s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 image ls
functional_test.go:447: (dbg) Done: out/minikube-darwin-arm64 -p functional-942000 image ls: (1m0.14656925s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (120.29s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-942000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.18s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.18s)

TestFunctional/parallel/ProfileCmd/profile_list (0.14s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1314: Took "104.018667ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1328: Took "32.681792ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.14s)
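
The `Took "..." to run ...` lines come from the harness timing each CLI invocation. A minimal sketch of that stopwatch pattern, with `minikube` standing in for the built binary:

```go
// Time a single CLI invocation and report the elapsed duration in the
// same style as the test log.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	if err := exec.Command("minikube", "profile", "list", "-o", "json").Run(); err != nil {
		fmt.Println("run failed:", err)
		return
	}
	fmt.Printf("Took %q to run %q\n", time.Since(start).String(), "minikube profile list -o json")
}
```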

TestFunctional/parallel/ProfileCmd/profile_json_output (0.14s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1365: Took "110.765292ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1378: Took "31.61475ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.14s)

TestFunctional/parallel/MountCmd/specific-port (0.84s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-942000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port672461811/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-942000 ssh "findmnt -T /mount-9p | grep 9p": exit status 80 (67.477875ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17225-951/.minikube/machines/functional-942000/monitor: connect: connection refused
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_mount_69dae3a3a69f4f5619d96bab1ad4fd540043798b_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-942000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port672461811/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-942000 ssh "sudo umount -f /mount-9p": exit status 1 (69.594333ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-942000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-942000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port672461811/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.84s)
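
The mount test above probes the guest with `findmnt -T /mount-9p` via `minikube ssh`; the log shows the first probe failing with GUEST_STATUS before a later attempt succeeds. A rough Go sketch of that probe-with-retry shape; the profile name, attempt count, and delay are illustrative only:

```go
// Ask the VM whether the 9p mount is visible, retrying because the
// machine may not be reachable on the first attempt.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for attempt := 1; attempt <= 3; attempt++ {
		out, err := exec.Command("minikube", "-p", "functional-942000",
			"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Printf("mount visible:\n%s", out)
			return
		}
		fmt.Printf("attempt %d failed: %v\n", attempt, err)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("mount never became visible")
}
```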

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (60.13s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-942000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 image save --daemon gcr.io/google-containers/addon-resizer:functional-942000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-arm64 -p functional-942000 image save --daemon gcr.io/google-containers/addon-resizer:functional-942000 --alsologtostderr: (1m0.007242083s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-942000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (60.13s)

TestFunctional/delete_addon-resizer_images (0.18s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-942000
--- PASS: TestFunctional/delete_addon-resizer_images (0.18s)

TestFunctional/delete_my-image_image (0.04s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-942000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-942000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestImageBuild/serial/Setup (30.16s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-094000 --driver=qemu2 
image_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -p image-094000 --driver=qemu2 : (30.1553355s)
--- PASS: TestImageBuild/serial/Setup (30.16s)

TestImageBuild/serial/NormalBuild (1.04s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-094000
image_test.go:78: (dbg) Done: out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-094000: (1.037974792s)
--- PASS: TestImageBuild/serial/NormalBuild (1.04s)

TestImageBuild/serial/BuildWithDockerIgnore (0.17s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-094000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.17s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.11s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-094000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.11s)

TestIngressAddonLegacy/StartLegacyK8sCluster (65.22s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-arm64 start -p ingress-addon-legacy-131000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-arm64 start -p ingress-addon-legacy-131000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 : (1m5.223143583s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (65.22s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (16.34s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-131000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-131000 addons enable ingress --alsologtostderr -v=5: (16.337950917s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (16.34s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.24s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-131000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.24s)

TestJSONOutput/start/Command (44.83s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-435000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 start -p json-output-435000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : (44.831936708s)
--- PASS: TestJSONOutput/start/Command (44.83s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.3s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-435000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.30s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.23s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-435000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.23s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (12.08s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-435000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-435000 --output=json --user=testUser: (12.07658925s)
--- PASS: TestJSONOutput/stop/Command (12.08s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.32s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-936000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-936000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (86.800208ms)

-- stdout --
	{"specversion":"1.0","id":"e0098faa-af47-4b02-b0ae-f018275f4239","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-936000] minikube v1.31.2 on Darwin 13.5.2 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8bff7f65-a5df-4c53-8270-65d0ac718bca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17225"}}
	{"specversion":"1.0","id":"3975dc29-6fc0-420e-bca7-7e9c4663c801","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig"}}
	{"specversion":"1.0","id":"d2a7ff37-279b-49c4-ad2d-461c268a243d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"390fd559-91ce-416d-92eb-ae1395abfd23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"10cc14f0-c317-4ab0-8a79-5bfef1cdb636","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube"}}
	{"specversion":"1.0","id":"022853fa-39ab-45f7-b9be-d008da6d9de8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b9a88189-7674-4b6f-9bc9-80f48bf420e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-936000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-936000
--- PASS: TestErrorJSONOutput (0.32s)
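
Note: each line in the stdout block above is a self-contained CloudEvents-style JSON object with specversion, type, and data fields. As a minimal post-processing sketch (not part of the test run; assumes jq is installed on the host), the error event can be pulled out of the stream like so:

  out/minikube-darwin-arm64 start -p json-output-error-936000 --memory=2200 --output=json --wait=true --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'

On this run that filter would print: The driver 'fail' is not supported on darwin/arm64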

                                                
                                    
TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestMountStart/serial/StartWithMountFirst (17.12s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-255000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-arm64 start -p mount-start-1-255000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : (16.119806625s)
--- PASS: TestMountStart/serial/StartWithMountFirst (17.12s)

TestMountStart/serial/VerifyMountFirst (0.19s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-arm64 -p mount-start-1-255000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-arm64 -p mount-start-1-255000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.19s)
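
Note: the mount verification is two ssh probes: listing the shared host directory and confirming a 9p entry in the guest's mount table. The same checks can be run by hand against this profile (commands identical to the ones logged above):

  out/minikube-darwin-arm64 -p mount-start-1-255000 ssh -- ls /minikube-host
  out/minikube-darwin-arm64 -p mount-start-1-255000 ssh -- mount | grep 9p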

                                                
                                    
TestMountStart/serial/StartWithMountSecond (18.53s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-2-256000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-arm64 start -p mount-start-2-256000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=qemu2 : (17.532746667s)
--- PASS: TestMountStart/serial/StartWithMountSecond (18.53s)

TestMountStart/serial/VerifyMountSecond (0.21s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-arm64 -p mount-start-2-256000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-arm64 -p mount-start-2-256000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.21s)

TestMountStart/serial/DeleteFirst (0.09s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-arm64 delete -p mount-start-1-255000 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.09s)

TestMultiNode/serial/FreshStart2Nodes (98.12s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-705000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
E0911 04:19:32.397237    1393 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/client.crt: no such file or directory
E0911 04:19:32.403962    1393 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/client.crt: no such file or directory
E0911 04:19:32.416009    1393 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/client.crt: no such file or directory
E0911 04:19:32.438039    1393 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/client.crt: no such file or directory
E0911 04:19:32.479160    1393 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/client.crt: no such file or directory
E0911 04:19:32.561219    1393 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/client.crt: no such file or directory
E0911 04:19:32.723302    1393 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/client.crt: no such file or directory
E0911 04:19:33.045362    1393 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/client.crt: no such file or directory
E0911 04:19:33.687458    1393 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/client.crt: no such file or directory
E0911 04:19:34.969597    1393 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/client.crt: no such file or directory
E0911 04:19:37.531712    1393 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/client.crt: no such file or directory
E0911 04:19:42.653812    1393 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/client.crt: no such file or directory
E0911 04:19:52.895905    1393 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/client.crt: no such file or directory
E0911 04:20:13.377992    1393 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-darwin-arm64 start -p multinode-705000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : (1m38.004802834s)
multinode_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (98.12s)
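
Note: the two-node cluster comes up from a single start invocation via --nodes=2; the m03 worker is added later by TestMultiNode/serial/AddNode. A trimmed sketch of the same flow (verbosity flags dropped from the command logged above):

  out/minikube-darwin-arm64 start -p multinode-705000 --wait=true --memory=2200 --nodes=2 --driver=qemu2
  out/minikube-darwin-arm64 -p multinode-705000 status

The interleaved cert_rotation errors appear to come from a watcher still referencing the client.crt of the deleted ingress-addon-legacy-131000 profile; this test passed regardless.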

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.8s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-705000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-705000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-darwin-arm64 kubectl -p multinode-705000 -- rollout status deployment/busybox: (2.635021333s)
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-705000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-705000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-705000 -- exec busybox-5bc68d56bd-9b265 -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-705000 -- exec busybox-5bc68d56bd-bg54d -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-705000 -- exec busybox-5bc68d56bd-9b265 -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-705000 -- exec busybox-5bc68d56bd-bg54d -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-705000 -- exec busybox-5bc68d56bd-9b265 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-705000 -- exec busybox-5bc68d56bd-bg54d -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.80s)

TestMultiNode/serial/PingHostFrom2Pods (0.54s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-705000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-705000 -- exec busybox-5bc68d56bd-9b265 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-705000 -- exec busybox-5bc68d56bd-9b265 -- sh -c "ping -c 1 192.168.105.1"
multinode_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-705000 -- exec busybox-5bc68d56bd-bg54d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-705000 -- exec busybox-5bc68d56bd-bg54d -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.54s)
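
Note: for each busybox pod the test resolves host.minikube.internal and then pings the extracted address (192.168.105.1 in this run). The extraction pipeline assumes busybox nslookup prints the answer's address on its fifth output line, third space-separated field:

  out/minikube-darwin-arm64 kubectl -p multinode-705000 -- exec busybox-5bc68d56bd-9b265 -- \
    sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  out/minikube-darwin-arm64 kubectl -p multinode-705000 -- exec busybox-5bc68d56bd-9b265 -- \
    sh -c "ping -c 1 192.168.105.1"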

                                                
                                    
TestMultiNode/serial/AddNode (36.57s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-705000 -v 3 --alsologtostderr
E0911 04:20:54.340097    1393 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17225-951/.minikube/profiles/ingress-addon-legacy-131000/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-darwin-arm64 node add -p multinode-705000 -v 3 --alsologtostderr: (36.399719667s)
multinode_test.go:116: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (36.57s)

TestMultiNode/serial/ProfileList (0.17s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.17s)

TestMultiNode/serial/CopyFile (2.49s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 cp testdata/cp-test.txt multinode-705000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 ssh -n multinode-705000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 cp multinode-705000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiNodeserialCopyFile1643054198/001/cp-test_multinode-705000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 ssh -n multinode-705000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 cp multinode-705000:/home/docker/cp-test.txt multinode-705000-m02:/home/docker/cp-test_multinode-705000_multinode-705000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 ssh -n multinode-705000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 ssh -n multinode-705000-m02 "sudo cat /home/docker/cp-test_multinode-705000_multinode-705000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 cp multinode-705000:/home/docker/cp-test.txt multinode-705000-m03:/home/docker/cp-test_multinode-705000_multinode-705000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 ssh -n multinode-705000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 ssh -n multinode-705000-m03 "sudo cat /home/docker/cp-test_multinode-705000_multinode-705000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 cp testdata/cp-test.txt multinode-705000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 ssh -n multinode-705000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 cp multinode-705000-m02:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiNodeserialCopyFile1643054198/001/cp-test_multinode-705000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 ssh -n multinode-705000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 cp multinode-705000-m02:/home/docker/cp-test.txt multinode-705000:/home/docker/cp-test_multinode-705000-m02_multinode-705000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 ssh -n multinode-705000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 ssh -n multinode-705000 "sudo cat /home/docker/cp-test_multinode-705000-m02_multinode-705000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 cp multinode-705000-m02:/home/docker/cp-test.txt multinode-705000-m03:/home/docker/cp-test_multinode-705000-m02_multinode-705000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 ssh -n multinode-705000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 ssh -n multinode-705000-m03 "sudo cat /home/docker/cp-test_multinode-705000-m02_multinode-705000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 cp testdata/cp-test.txt multinode-705000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 ssh -n multinode-705000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 cp multinode-705000-m03:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiNodeserialCopyFile1643054198/001/cp-test_multinode-705000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 ssh -n multinode-705000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 cp multinode-705000-m03:/home/docker/cp-test.txt multinode-705000:/home/docker/cp-test_multinode-705000-m03_multinode-705000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 ssh -n multinode-705000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 ssh -n multinode-705000 "sudo cat /home/docker/cp-test_multinode-705000-m03_multinode-705000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 cp multinode-705000-m03:/home/docker/cp-test.txt multinode-705000-m02:/home/docker/cp-test_multinode-705000-m03_multinode-705000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 ssh -n multinode-705000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-705000 ssh -n multinode-705000-m02 "sudo cat /home/docker/cp-test_multinode-705000-m03_multinode-705000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (2.49s)
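
Note: the copy matrix above exercises "minikube cp" in all three directions using <node>:<path> addressing, verifying each copy with ssh plus sudo cat. A condensed sketch (the local destination path here is illustrative; the test writes into a temp dir):

  # host -> node
  out/minikube-darwin-arm64 -p multinode-705000 cp testdata/cp-test.txt multinode-705000:/home/docker/cp-test.txt
  # node -> host
  out/minikube-darwin-arm64 -p multinode-705000 cp multinode-705000:/home/docker/cp-test.txt ./cp-test.txt
  # node -> node (m02 to m03), then verify on the target node
  out/minikube-darwin-arm64 -p multinode-705000 cp multinode-705000-m02:/home/docker/cp-test.txt multinode-705000-m03:/home/docker/cp-test.txt
  out/minikube-darwin-arm64 -p multinode-705000 ssh -n multinode-705000-m03 "sudo cat /home/docker/cp-test.txt"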

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-980000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-980000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (82.704958ms)

-- stdout --
	* [NoKubernetes-980000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17225
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17225-951/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17225-951/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
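
Note: exit status 14 is the expected MK_USAGE rejection: --no-kubernetes and --kubernetes-version are mutually exclusive, and the check fires before any VM work begins (the command returns in ~80ms). A sketch of the failing and working invocations, with the remedy quoted from the stderr above:

  # rejected with exit status 14 (MK_USAGE):
  out/minikube-darwin-arm64 start -p NoKubernetes-980000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2
  # clear any globally configured version, then start without Kubernetes:
  out/minikube-darwin-arm64 config unset kubernetes-version
  out/minikube-darwin-arm64 start -p NoKubernetes-980000 --no-kubernetes --driver=qemu2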

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-980000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-980000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (43.319458ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-980000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (0.14s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.14s)

TestNoKubernetes/serial/Stop (0.06s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-980000
--- PASS: TestNoKubernetes/serial/Stop (0.06s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-980000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-980000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (40.356875ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-980000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStartStop/group/old-k8s-version/serial/Stop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-011000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (0.06s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000: exit status 7 (29.471917ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-011000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)
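
Note: the status probe uses a Go template to print only the host state; against a stopped profile it writes "Stopped" and exits with status 7, which the helper accepts ("may be ok"). The dashboard addon is then enabled while the cluster is down:

  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-011000 -n old-k8s-version-011000
  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-011000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4

The same Stop / EnableAddonAfterStop sequence repeats below for the no-preload, embed-certs, default-k8s-diff-port, and newest-cni profiles.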

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-581000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/no-preload/serial/Stop (0.06s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-581000 -n no-preload-581000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-581000 -n no-preload-581000: exit status 7 (27.981709ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-581000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/embed-certs/serial/Stop (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-151000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/embed-certs/serial/Stop (0.07s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-151000 -n embed-certs-151000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-151000 -n embed-certs-151000: exit status 7 (27.30825ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-151000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-775000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-775000 -n default-k8s-diff-port-775000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-775000 -n default-k8s-diff-port-775000: exit status 7 (27.922792ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-775000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-757000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-757000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/newest-cni/serial/Stop (0.06s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-757000 -n newest-cni-757000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-757000 -n newest-cni-757000: exit status 7 (28.581292ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-757000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (21/248)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.28.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.1/cached-images (0.00s)

TestDownloadOnly/v1.28.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.1/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/VerifyCleanup (14.57s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-942000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4259139690/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-942000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4259139690/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-942000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4259139690/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-942000 ssh "findmnt -T" /mount1: exit status 1 (76.447333ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-942000 ssh "findmnt -T" /mount1: exit status 1 (107.628292ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-942000 ssh "findmnt -T" /mount1: exit status 1 (111.258791ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-942000 ssh "findmnt -T" /mount1: exit status 1 (109.382875ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-942000 ssh "findmnt -T" /mount1: exit status 1 (107.921084ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-942000 ssh "findmnt -T" /mount1: exit status 1 (110.002208ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-942000 ssh "findmnt -T" /mount1: exit status 1 (107.556167ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-942000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-942000 ssh "findmnt -T" /mount1: exit status 1 (108.356375ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-942000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4259139690/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-942000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4259139690/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-942000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4259139690/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (14.57s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
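The KIC, storage, and container-upgrade skips above all share the same gating shape, switching on the selected driver or the host OS instead of a flag. A sketch assuming a hypothetical usingDockerDriver helper (the real suite has its own driver predicates):

    package integration

    import (
    	"runtime"
    	"testing"
    )

    // usingDockerDriver is illustrative only; in practice it would be derived
    // from the driver selected for the run (e.g. a --driver style flag).
    func usingDockerDriver() bool { return false }

    func TestDockerOnlyGateSketch(t *testing.T) {
    	if !usingDockerDriver() {
    		t.Skip("only runs with docker driver")
    	}
    }

    func TestWindowsOnlyGateSketch(t *testing.T) {
    	if runtime.GOOS != "windows" {
    		t.Skip("test only runs on windows")
    	}
    }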

TestNetworkPlugins/group/cilium (2.57s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-838000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-838000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-838000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-838000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-838000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-838000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-838000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-838000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-838000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-838000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-838000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /etc/hosts:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /etc/resolv.conf:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-838000

>>> host: crictl pods:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: crictl containers:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> k8s: describe netcat deployment:
error: context "cilium-838000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-838000" does not exist

>>> k8s: netcat logs:
error: context "cilium-838000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-838000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-838000" does not exist

>>> k8s: coredns logs:
error: context "cilium-838000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-838000" does not exist

>>> k8s: api server logs:
error: context "cilium-838000" does not exist

>>> host: /etc/cni:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: ip a s:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: ip r s:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: iptables-save:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: iptables table nat:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-838000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-838000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-838000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-838000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-838000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-838000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-838000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-838000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-838000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-838000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-838000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: kubelet daemon config:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> k8s: kubelet logs:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-838000

>>> host: docker daemon status:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: docker daemon config:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: docker system info:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: cri-docker daemon status:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: cri-docker daemon config:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: cri-dockerd version:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: containerd daemon status:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: containerd daemon config:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: containerd config dump:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: crio daemon status:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: crio daemon config:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /etc/crio:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: crio config:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

----------------------- debugLogs end: cilium-838000 [took: 2.329256708s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-838000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-838000
--- SKIP: TestNetworkPlugins/group/cilium (2.57s)
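Every probe in the debugLogs dump above fails with one of two messages for the same underlying reason: the test skipped before "minikube start" ever ran, so neither a kubeconfig context nor a cilium-838000 profile was created for the diagnostics to query. Both failure shapes can be reproduced without any cluster present:

    # kubectl probes fail because the context was never written to kubeconfig
    kubectl --context cilium-838000 get pods
    # host probes fail because no such minikube profile exists on this machine
    out/minikube-darwin-arm64 -p cilium-838000 ssh "ip a s"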

TestStartStop/group/disable-driver-mounts (0.24s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-375000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-375000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.24s)